[Binary artifact, not text: a ustar tar archive of `var/home/core/zuul-output/` containing `logs/kubelet.log.gz`, a gzip-compressed kubelet log (member name `kubelet.log`). The compressed contents are not recoverable as text.]
#"< "TY^ #00hm*.D#1ٱ=je!7 (1'OS$t61 %E$jg'@EsZVN.HVŃQ5fÅRЀU[]3f|^\=|;ھyDm|>]=<{GQ "^%m)"1K5 >`_Fipl=^to= >t҅taW8;nפwy7[ [Tp?myЊp,ѲӖ ё:k䗇̭29;ݾ,.|;yQ.dQ R ,-yRFAMfW( &ٰ5|}皍{ (y| SXGLKK9b50UzvZ|֭SJj-LpF"Ra1JGNzE), ɂ% :ؤ'5 mcTmsvhmZxҍpԳ^ l"5ry eK42(i#E&E&DWTm)"$j$MQbU -=p_;!,PZMG>' 1xǛD ^ uj5Zq~1Iu;"D!c :Fx7gI-_0`)lZ(˘W4O5h환2j1ۻW^P MP ,`+qE@ɑMŇl1 :i*.X[R6 ?FQ=ک-lM$PAߢݬn_._ܦwD]$"`"`#4ks"oj 6͢jk^yW1`/~͛?}:]ON䟣iT1v)(Ձ#+)$ v?n$mw}᧹spi-@G8_aWm1kb-z~?z^\#k5oa-VϷk%rE]$췢 ,-9)uiӬbL7x&oss/%qhsk_<ȯ=RT˔qplf}[ʭHKr֛:Ï=Ki x?<ʥy bNNVoNqqOrOO2qg9O(&DGZ+ βDL hcIcB9L-Pؚ6Q(x9Er ]Z׺i6|RI#J -;g]M~0ķ_O) y~I>FB 9ż|w_.|Zl]Xq{ ĜABG: t!I_d2YI\X2a=R .W3c[`@9gf%bƛ R AtmS\>gkB%HR!56%rER1&xҞL5w1l6=n!5G受DF)T|qQ˒NZdHz4BAX,d!,ֺ6 -k@MA?YJD 6-`ccEm6=@jmi)0Om&'Euˢd,//,LrgGN}J7wSTޣBHPxZ8!4Dv#wR DC 6³JJ%Q*,b*Z*1qPPd^JJ  g72*Ͱd q'ottg2 鬋/ݑO8mp1_9b̊!֒Gx%XJQIEBFQֺ>9RBujAiUaS[- )Jm'@\Dtpv#v5s(L:Dm%ڢ#j vk+ -HPGl*ÄK_eX (.Y P7b?:f؊:;ƚE*+#cC*&Fuvugި_% U`LVq("ƈ(GDqlR*31܁79DHV URPBi]B!rI(sـT:9'3΄ÂlNF8$ĚC g7"~8ϼԁqqjX:$_g3)9uc\T#. fxS5#X6y΂ˢKqjeTvŧTi%!4C8 vt^tȽ +u\Qю{я/&Ҁi%@R(uRl.i@+tN]BA>T$=z _&R6R$O!BdԃdUj8]K\R}+޴>}LolWzۗ1zRZ'~»svČ|e*VNm?{F-0m0,&؝ ~ >&#z؎[R%mLQECߙLGq|OEnvZG-rؒގ̿ذ;9.t u8|B<zmݼれ!XE45/=ѪB [aco,ҷ; VR]:S4.<nV^N󝎠B4h2eaBy2}S-RF^xc8nN#IEN+7։3hs.AE`j4G|ԖhIlSklvb#ѓI`|ei6/&aX)}_M}4OJΠ*m1SC ?] k!]\8 G'*Њs4x xIQqِhPh1 [9/tg$=a]J|Ck\ypP3gH5'ID$|@kuM> m{h략DI88Z8ϢD46)Ri`=58rUQhH$Ҙc ,5gHLI\"|b,9A]`-o)F-eԉQ~Xd?fgH$|Bc|d$ZcS\ Y2,c~&&p 넷᝹eٝYo*ِz5g;e> (۸ƅdLؖιCTHwfO,] #è}yނt"XU2cx)G_ ۙۘPf~03Zz%$"Pz҆''.s\ ԍFTQV,2Υ :$Iu&AU VcBbkǴ҄lkG!A{ nʎcd}渶?j/`{ꞰV}.]8U`f-Э__8,C xIIL+$&Et0hWg;P@P}fY&xL1fAQd.zEgVRPQ2LGuGC4@\9%NuHr'ǣ`,=ZzڙFe$ۦS7e' B?U'>夶6-X-2'/^Ey$Dwcg9vZ$EhC*\2R$,%ApOCIRPcB5|S8%3x8 ,NH]BPȝ^iT˸5r6kM磫oC<~enx~#5j~ Iz]'pYfb׆y̮z o^`8Qkٻ\I8'(Cs!e׼K,]?.ҫT(&AϯͪB,YbrƭWjxyYx}c|j~_s}󷫝:>ohil&oCꚊg˭soY9!NNByCk?v۽~rl({-d[mootΰwb3 V'pOuToT,lv&ď/10VW"+^o|v]U };A`=>ffYћMszy1,?m|gK^fnٯM 6u姳7o/zy󉚿-kd:k{ ߼.e}ZX-iȍ`:Z\rƷz+ݥmSu}Xbe{PO*BQ]:#~SZk~yqQ 27!9q-.}Y^B&@&=c¨ A2ÝHkM(DV(H=Lh@4^vzai4cc#dHѶ#Ezyn*puSO"w%8=(E'ڀ \KMDx, Oxtr84Ę# J} QhU1FS,DH3/6\z;D NjCR7LgI 0s2.HV!\YW!ɘ1#1M%;PRqXu2v@2,ۃ=݃i&Um-+ v/{vA)'PJaF~E!Ri1D+!9eݩ.vc[ 1HiJbF<1V27Z2_o%!x^=397~8TUuař !Cv-~L{vqўБZw1'y1v_x\z*/\чŠpEvUPm29J<c/MeiR;; ܭa--pH/ <1h 5SjQ)pHT$ZXR2/؅ߛ]C~x.=dyy^ns.hTe>\8?f@cD1DZPi4ZFE`!CI+Dv2/ظS b؃b}B$J ml%Q#kvPg dd%I#yJ?ɇ >Xb}frdY6+> xL(aB"WD%(Ѱ نXKؒ&S=f Αz .G7X}5?c2XC56+ &ڑݹ.BVK5x1~SЅEW yh9h}X:+ ǬKI9)7`brH6ic|>ݑ)oM1ҽ,;Efr4;?-]/J[2g'ᢴ 992Mg'cEhV20N駒Qá$^&y˘z(3|BOf.9u|ruUrFzjٕ iE^Nl"٪ =.T_.ϓXSETROeؗ8=! )1{_J=XgZOq>w}!6d|ڿ ұ+f=[z|bw_Ysj׺Lzv ltw6'|C }i䆟kmu;?|m!Ǵ/S#~cߛ,bsM;;LG]).|u9(| HYX%cQ?ĥ7r2w3~g $%pvXLźSwa%-Bm}-ϼI0zpl 1kńJ BMcNX.WQE*JKtfTp?S [R&Rk:Nrnzޮ?~,U:er^qZ=Y&_k/L Fw]a${Wޭ2G4XAICehznm !y, /mWSݤ'lߵ:0}qz7ymy|_D zyNZs- qc3+)g 03<6\p,ڤ CJG =C ke)eJiq)fN H{%(&Lܐ草6UchmR iȲRk \g@] Yxoaa2׎}=!_+= %p3k qs ‡NBRc@^Zֆ=3{ObzE{=:)6hỌrÀ:͆EG: ԙ8g*pN:yg 0LXZc-5)2-ǔ.} {bDxҡgOFٱ]F}L{﬏/7јM\6A R H:0S,hQCT6چZQ0Ԟk!l%xf#RU,Y6[K톚Φm3Ւ:- ](t,ddc2Sl_|Z_Cy㭾)6{tʭ~Llk2>Z U;j fǭ,mmU|9(CW8Å7P@z[mM1{Ϭrؘ%zR*3z]6gp&E͵t+2ZrR=c5qv`쫆UhҌ}}!VvO_&aϯ 2^g]lYs>B h|6=vVF$ȹQNp)",V^|\R(x+u:@{. &RY+4x &Sl,fu)`lZl~2+!xjڱ^F= u9!Z{1c&FK\<ِ3VFGҲlO?JW{EȐEA&C&&HM|UddA"ͻ8ƣI}zD^##qO"x'QRdܞX¸#N! I&Rf DR }3I,H:in$aHZNB>f&vx>ѫgc묦%EY//~q;>ҢΥ5>QY픈0X"(C,rξ9E$5ZMED2d -F [r믒vvIgངb K,H (!2 ,|dthjgr!FIP DMFK3:RfZ$CYΪvW`fZ[*Բ  z " ` e> Ɓ_,eb0&pw/Ҵ'RpN\St$tT.@(9>xgh ٖ3P !N⤥wYbrYO`BYiG: e0U@ EqD=GqT>džv|lx7ߓ vq}Vg~e_D'z=C'ۭt?|,X0ҵSZ:o}zߵbM~`bl_xQ.o@=)}5 AM <6Y|-ICRw([n5r /ui6ګ*֯bʅz[ʡf|Ц UN-QF<(+6g hJ"Z#s\ Lׅɱy,v[&;>|*&]T|hht>[oɭԆuoG>(dݫ?'5qbGer֑7b^%jXCԵvMñˎ>v-`zsze8TS5ս ?D/ T sw6G|fKzoCC,͛E: i)KCݠ)$e. 
*xw_z}wNZy`EN4SQ3K/@VGT@[v_ګ[~q1=:ԵЮ"Gi*DVўO[pm&G׶̊$upU"#u; [< [< !h.9S$K)12 eJ[)f'0r[z?;9aUmRQ$}*v4,E-s-ٞc"s3d~yM+ t8 R!7[ܤd)?wKު([eI*I x* ė.鎼 W'ӓ.]NƍXƢ 9$΂m 2vc]p&ObcAPR+˼pQ6,+:K *#Qq іUae )Kn}L.HGZ8YLt4al5qv䉛zcffZO7v{MO='tow=ߍ׼熸݆x|Cyl yz 1mȎL- qni$h:4 y] Uֻҵs+e)rh0 lN*hIQb́.&v ~g}d!,Z7u|m؄|K-4~J_A-JȞ6lc#Gl|Q s>855=žDvj`J(YT{'bYM ǣٜl?ZA {O2$!(FS'!| 9Nc +R4dʣ0Яmn3NBDDYkE)@0hPֳtVyT_~|eΚX>GcL_Cxˡ5ֲYw ~q]S0߮MǹU!>ԃJl6F~WO5vɿ?lM\>myNR6] #Ihy_N!c~Ӓl-fȶhR);rцviZ}#[R"icH(]E}ID%A$iQlb D1!N" }8rU սWRԼ2zj"@x!^ʤ3LEHQ0;#SMQn3$D y +_Iws/ %QQWT2Sף*ٰ& TJv ny%W ny̶W*p_!n9(O^C}kEuNpSɳjKjA_F~- u tc;%~=c嚥K>r*HţjD0ޮOǽ4͊eu|:6m|ȥ7gBIG=YPvx}<H)B&Ȇ$M4ѳ2*&P>F1y%ړR=\mzB0sM QpBpo:r|(NA\І [):Xa H.@AH[ʢ1tEn ɹ72^)mM^8|vqejgkqH%xAi$S}p8JBRÜ(HFxPycA1 F} Ɉl 9ALn4NF'Bt!D=/4D4d )49n[;ҍzsˌ-nE[Mp"5gV<}$E F%)H#t")sF$2cq!E'3=ʃ٨ziC!/D, 0{)j5AyQ+ űšc^ZvgCZ+${4V*a*)fo:M r˦чPۈQԷg7 ߃Ȟ>xї]l>+W5+U4I@'7K7}AƯޯFS9!/4Rg{o2eyhz4N?ٟGtz<g{兞Qχ^_l~[$,w(}|zz@ғ_bѷ lƃHGT|^h_Ƚ.,H]N(['Ċ 4Iƒ֢ixו~'1=_laxDĀx01o>_RU = Ogq/MNO㯇+MVݰ_t A SlMkU+oS]7ӳ9]?޶tm[Y4:B:i']]Mq6x,j\m4Ֆt'kӜ1Ξ-(]}E޹dvK)KQHBF&^(QMdϦjonYݝ^o߅ʮZDR2N<aQgCD~2$}u}h]bYd*,jZnYv(&uYH*@Z~ąId@<Y:O!I4yYzȹcN& )#)]h-/JHҰ5Xʳ)gXY2U4)")llko4KB],9D0cZl.QR3"e ] xE9 ћ[h|\BRm!FKA9)1J>P0'%ddNM1 xdldR K&lk dm( !2@~"^}1!Us8/=D Jb Ljhi2 {O9ں$<:LM̳* FPG2 )b4C7;d l顔I@ OBYsve6{c<0Q=Gz#u}җ/(nXOKkgPajYX7gK̿KusXY C7`;qS5;EV}jģcT*WqZB}9s<\{l} rj\:*۹/tkm ux-ֆ:EȀ)kI`U5??Jh@BX+uوzٔ]~O]trR"h 2\^).o }DE>gա[g Kqa7xk!GD|v)΁z3dN%Id(D .(y7UI;Y9b$?B=I g%Dl5dʔZ h%REJeCKac0*gAb[ ɌLEePTM|Yh*o ,HHZ{ȹrW fے "%[Jώ][o;r+@ZHV ا`<%Όl߷s28ȧ [us$U.䓈Ή(Bb]uUEP d? r_;!,PZ֎MG>' 1xǛD ^ :s[:Ax3AOzEogPb"ʈbz RpPL$*TE!KFpUv3'l%a/p\.۴O mq&0*K`TkLLc^I4^ӼԠg`Gx඗XDxl_&yBEOȱ1!̘}H(9-&R'm[%kK&g`vc]~4[S+ ٗ;1weZ7Gg;&]d_4pH|7 `,WFgKN&3Ŕ d%.@]QF*CK"(*= Zڧ}@X_u/F)A,|s&&o(l ̚ $i if=uHo޺_B{{E>eW_F!tr{'7ܮn'ӓEeᶾ?nkWxLqϫVwGhՀnZ{-և)ߓ^"vŔc~ַꃺf~w~tK|$48#-e[~aA=[sl6zi<=_HUj=ǁQ`:Duɪzw C·zևmC2ekP.9s*Q MV\!eTKĠۣ6F*;Ojp@&*eS:K͆݊')JiWCy^;B~L3|  &C>O3G}e1 z(1grV\/2_ @7Tpt[6G:D UMgs"x,5Pd̙pن<ʌP'Q\ EA@6|dpď֢ΎeM"ZFe_!Dd:;:p^_h ⶢ}%l, e6)dWٙf~%*$@2kZ2_(K(DZ; \6%N,gBaAJ#k fubh, gDi7gcu6}n, 1xO,<gAPJY B՗T|)Cmorrh}!4<<[&4_&ȝr1;vuQގ;8 ޏoH,:]c^8]b9~>[""i]3{T:%|IQ򧊟OHo z+7DR?+z mQ^}4]XF{v@ZoR½nuME:\#KE:Z[j桢[,g^AӛYBH0D"@ZP SYK4 #Op|%/Q¥bPJHEJB#-|V"h颋pRCR('W$VmA%f%{6SHZ޲9Y]Z5gr.kB[σ+7JWS%蚮;}-c>֠]27R4 K9{jub@pP0~uB9ɤľ,c%/4σ^^BDŽ̄vJ'M2oǯEwS^w3?Jx={&8@7>Dtse=||50L,1eIq72;E0,*bATGX ; TT,5(eL](2H`OA8MFZV4TNւʁ)RE'G-RUT$XB璦!Bl6SN Vgד˯Au+R븻cI J*[TI[0cc>gU&Wzɂ20䤟0񗫫_o:ӟOD(.>Y mjj~9D5a-&'!3B'#`eV!j(:GAZBK8ȳp@1؅opeKKA]1NA0&ϻ,.]0JCGTPw0*" >- .DmNљL+DZ(A 8g0GĻxר$EmԎ<DN,#6Zj]kg<3nϪη-]wݻn8n6=7utclbCm~ v4}cCAqlY+~tٻo3eӓF6'^uɖm7:TFsj_ gCv*jw j[*vmVC^ [~Gy63>o3>oHoY$Ǖ{zj *|ȓv3 ~qEad #V6-=Fѹ9f}wX ^cYo@l~4^uds"6X,)RWBd1QCS'aCY%u,I1hmMYrCr$K"ZWpk6@Kiz,~ Fq ͯ2vOy7w;(~ejU5~/,%RS9KȈem1!,:𤴊>C "C X/oZ K˃m{ZWY9gtP98 NJ=3BTp`gZ8={N8Pފ;+̺SP=-T1thXc8_\$LF3 &?>ִS,s7ud)¤<&dADq. CR9e-#S#WXKQFQCpsDȿXəQ(#s1-OU\%wSkOoNwgiF:nWyV zZ rlj=6,p~X'='s~.?Fq72ɥֈ_/ U2Щ66#O—QY?xdV,S<&iN%"PBڥ:՚դS?atziQ7 kOG!rY a~zMy!/oQo|\Ӊ?ޘ~)?[7oz)t~~W`~! 4Ԋz:56ɿ}M~Q1?ٻ6r$W|;v"dq|&; ;36I-ٖ/=n=@<,Y"BwOVדߢW'xI̾C:2*w>#bqR ,_hlk >nkFnmAnmfsZ|`ź/泯ԪϐRN~cMy훖UiӊM+hߦ]۲[vP漿c!by# CS6g͖/.;H04֏FJch mBr*'&bD`mVgm{\^{= ׻ff Y"L`M/"H.y3$D1 <}(g{AH碡Lə2|snp>C9qaAn7Θ+8\na(oJj!tEf՝Y]$^g'1lRx_dw%d3ԮNՠ#!x x(*d?\2Jc!`@a!XB^E)ܫ.͟ɹh\R!O74>'|_] ?3ܳoףs˫Wz7?[K=7 QCOm99;PTln(^uR:RTliρnTLFnTi5=pJi7$JJRdMeHW,-Y:\NqfO-UU׹C+ր:\U) tH*:UWO t Nґ*}SJWOFyP`ʃ*.C+VHRJ5++llgnz\)m~\<^OnIh z߳za8;j_?Nǫ[_w~sBk l|{~ktx2aUZ]k rԶ}{:}<-zfkӁGФyabG1xW2G/97l*$:Ί};ka6Sp.YL]΁4Lיy6=x6~, RR+< QB0&ȑ Z k:%5tP|/í[|x3GrpjÝcy(V lϮ>.~gR_V` ZJ%%ISH/1EEܳopJEFaL|bwz#QϼY# ,|mgRsV{8Y~~Ŀ}XgP!XߔřG~ß|?|w-JIzT? 
:}o۫M6|^7~5> sЦvv7q64ʇ92Uw4UwzVQ͓cpwRSjT&h!'yTCQ"cvN5WVp$˓_㼊A()&e] 2{ ,^( 2ɮfh ?w2(PV*NJ+sAc@oѷuJsU J>!eZXrlSgrKۺ()3E#q.%LB:Kd,ާ`dT&Vl-/VNsOn?> E!fSXb SN9JPJ*HTkh) [m@a 9+CS=xe i21붠B9[2&d%S8ԳaCN|y(IK %ߪ<H>Ȉ4Z+-U2@} ~Z87, [G$#FE/UG$ѤTՊdzNK9I>D@yZ6Sqgp^Ε$"#F#pAeo6'I&`ߪ0jdՁW! T@&Z $H<Ǽ^9xQ)Q툰lI^EM)-#cD)J\k 3 %!2d1%ecգz..>}jv z*~xEkn:WLbu"B@,XlLw1 =SLiۯo}/u5C춥g2<1ByZcD1(Ǔ_~_ӊYӵ1b(go9ˎB.]H $Kuf:I3F1!T(*t9gf%b2ޥL ٶ6gE@rNA}f*"%g R1&xTƬ87Ξ] RՖ?~tx\RV苍 Jv.hPIlI6 ۂƣA FtJj5™5u!ş5O)3@0Vi-bcEm&-\ Ra:G_lrrۼ㣯mPRnVu9y|ONRө}î~?+G[ [#@"}t)h%͈ab͎Q[-ň#nƣ^$LL*ÄMH)T"QE `)XqA33d+:lkR,)&>lTg[4f:! /S3F+KK,y͛gUbҗXz|FCu=.q1|Lԓ鬯N/r6"7jAܓ9W﫟?ϓ(Bjkٶ–xqU 9_:Gr mA'mp"+dFSQ" 2Kj{ʣ X2T>ԂT(QVXo N8!9Ɋf5T܅Hi.aaK%,OZn?ִ Q4< 3Qb]Th9:LЁD# "VDrvJ T``$e+1AB)fM\>r|*G.´,NݽГIwANXq|0&Lè?Cy=0'!ev%bюcu{·NHXxM$(NAC ߽FsdDNεͤ7ߜL/OoٿFp*D] GMiF9ʻY 5k?3r)[ty9BQ!1BihwڟZᐳoOdi`+'.rXl(skNuN<ߢկ'o^]M/=bA}h\ϽA/vK#pմ[|}%ƺ\=aaekYf(R? U̻pп-d1f709Oͣ&nus'M=d$,}2j:KOhgXTy%f*Cϯ$d7Ы74/tIu  "j+\:%MPU]T>UlBNg5ޠz ?޼Wgo^yF9{g߾}^I!!5Ii4M ͆F04Mκ^]MNy͸3.#\-.cB PzPeՃe_lnHtna;!)1=nE*b$1r21tk#a\RzÑg> Cu.z @$^$O'ڠL{ȝWOIB Atk9ck9HR 7GRe+$a) +:$8+8&reŝ(!Ĺ༷T fhu3@-]׻]7WxAF&J|X\Bν~4$1l]`ruQ=.`y-5{~jb[5\;r3?do}O7, ϣ[rE[:m;nTY3 #9鮉)8MCq!meTk?Krkmg#4]CtŨPhaVJPwbp(EBE|fKzߏ\z;;Z=YyՉ1kLqŠu,D&'*^(A$藗A=b7QTɿʹ/RN]UYO*)M9WSIg ݥjqjT ;s/G+~sOo~6m{In+)֌/CTrQ`Ak7TUb!Յ:pJ@陪^#z)0V\myGMrwF Db^k˜@ BmD#1f}I%:=MN[ -%I.'F("a69M7{Hk=~1Ps;PvQ`dVh*p-6Pƃ`IHM !H+)#)]pNM9 D90))b)Xg^n^aB[H']ɞI 0s!$#[@щCRh:֚߻=0ӔO[\210e)*hy7OpU+cG$c%WFk3{khk^2caP{LgJqB_IJ|D !Ra1D+!9eSX3=[ 1HiJbF<1V2i^ eo%!x{elGV2U}Ycߗ^37-^Q_umȱiezi[7=:DED{tڗ2DL DMqo3u ^|Q|&/_'oU¥ᣟr8yFwCd^\iVź㓨YEQ墤LQ:*R9sK0V/T1|PVy} yp刟кmJAor%^|*\f=e||dC'I|r㛟FA(pngOnoKOD]4]t85T =q$ƖNX CCN Ct8eG'9}d,\S[U([&DՄ e$Qk՚Ajp bY =rHhw:#CJ2։֓T׏H鏷 7'axk=r?BOsMݠΟS ẁ5SU>|^VϸM7пz{LJ|T3Ǯ#Tƹ5jl:7]r_gkH~L;Ub.wSX\!NLS ƹ3 ݵ[YIl=tf{&/3 ^@;U١Ѽ3lw޺Ǭ'Ե{]w7;Vu`iaD4aַ=RA2s Cdy.H"?{WƑdJx3M}0B5#ǎaV0&7q @` ݲuݨ3+eVe>E5QAa! Sn瞈ЈatZe~}e:3jx%ًW;02")0ϹJM#yLGϢ :Ѐ3WP\smH% t&گ1okcmE[{[#} KlYŅ'^0Yh8Y (BÄ e 3/q&nMDY`$ $7Wzk 1:-=E'!O'Nmi>bG愲`cm 0 F a&5E&D4")D!(RNqb1waʽ3ø)aaXs^Wp}|7@,:1}3,׹ ,?YŇ-1XU5bZz$aǫ,=nC3Ur@ܺ8꥔-El7/~\v'Qf.>y[7W`N+.1,bMȱZʄ2a^pj& Uaw(4PdojgoBWk.'.A]v)B6 sph|`a `m1ɺ &WLrJ͙"<Țau?bd.[Xh~7:Rwa1U&tYg@]_xn?>{ l -v YmB wݔuS_ϽCCjkj2G_?sv g|͹y@0.`mbXiU t &cvy$"kY"M-zP^.j#-jaFX$g-ZQ#/ Gl-:b 7H0)+?~6B䌂*gRbr叫oTKzdPn zr`xpeX A[2xo 8LoI T9"+g/8yD>~L}R*'1)Ii Yg 3S&nfVV).tĿͿr`?2Lb,odpŌ4sй{!XS4T| d$*kyYD)gٟ~u~)eLwuἦ+\!Zݷ [TBwHirmR m7#K̙xd&si8*?Ґ -gsg:>~'a'@YMޖsK}<smq-[ևu+mRB?-ޛ3'd3 Vhrκ=~ۀKn"'Jќdsk,Q?p2zܞqC{j}pd,l՛`k,ZS|دq TZsSwc#*ЭA{^#O<{85ȓ^#%qDNW E%&) o% fҶ/ D6{IemcS7B~|[UnN S3ꔗbεb q) +xS/="J \fWӘG"huleF Fm6&f Cy1>:d^5g ^NitcM<ً]_QҝQƮh-#Yc:\߄^9sɼTy`ct+^ItTJI¯Kܟ&Ecpa:oIxbBΊ?֟+o;3zA>{&OG }w>J췵Re cPVEвk:L7gx‚=,賿T.|P> [ Y` Rիve F$ѥN\ Lsюd z[zQ􁁡ꈖE訸Pi?a`;Cw8ۨB`B<UX S띱Vc&ye4zl5ͭvپƚJ=χEjBEhA8x|]2&Vs |n,. ƊrCRmvE!HȵMԺ|`1p,4C3෋OtB˷Kݎv4unb g/7A<)yMnWYdf_XU Ka"Lb6բ9g-3.՜T!Yy˭ɿ 7O3>G rɴB0rڀ4ct-!9㈂x!G. {fBkB$w"OANmizdHUK% DPfx'eԖ%{,œw^!w;4)GAՆ_/mV+bhDmpC\h((rmAza-+ p9!29s>m^F:LQW>{y>Bi_Oͭ$vot-S"w&b^oqlWqujpǼlrȚkiER䌥 sm0k!f}O vՊ#tRRhjn2UɰL / B1DEu0M,2/+- *EȰF1)QIlJL1jveZ†gkYØliXN@$jwrәZauݫb1MzsQ3-9aәȉ :XsD(86,BBIf4ę&8bu` 69؊9YҶ=g>c;"E RrHd 3jei%1"iH&6['Ѹ/]@%Zrw?tźX8*=bQf% 1%8q&*"X{Na=\+ښ'ȜUd8WԁdS{/!mHd/Q9W [O]H.:b~]ɎHP#e8NBpk?P( sTi[;;e`\2{gWUG79.cF7|z52pL*<ؤ _^E~/ 0֛:Af*R /Nחs>dK3J%sHDp*tnFpg1(yl(Rz! 
H aIV7W*Ʀ0w]xF{7IBLŇdoRzN]| Jx#K3fSU^.//wrb\qӁй-rRb :Z120z-lE *p2![SvV\kc|xj=i@մ1wSrls${ߏ R[ӛn7hkb|uMwuՐj& {X>~Lv~8Ѭ]gswN:YWk]_(pZGɽ&k$UA|T4(.9eN - ~M5SQ,?}vF.y<^'Lr6fAU_`û23EVQ_O[՚ ԟ.vzuH^}m7w߼}w>˸=ihg759c_}}Մ7U57*E7jW6zT/AT||H K^:"l 5Zh#XO1UH3 pLwJ ۲3^%$EG00匋1]thgD|\m|k1%,V{\^RPFEDk'T{ș(Z2/34& ~a| em;J{i_(^YtʳVvsm`Girlks~3noJc1F1sɭHyx6ޤ%<2I;STH͂d2KRiJk+K:\+⤽р9qғDꯃpoSdo0< =pļ;&ֺؘ0v]\2шXX3$Hg F(vXq16jh$˕U GP T^py΃{T09 $+QXF# MiHJ4>! ߹ϭ yw]HBw\ַ(*xi xpQ""ƘR`B[1GWıh iF[ߪZjצyˠER K2,Eqd iyMr3V)#k1Nn:8}At`" a5%G&wmI_\|`ۇ 5Ej߯g)RȦE4`f8S:j}1?CR+tvS㐜uB=)*v Y_*Iۛ>. p)]0DS1g#R q5%yA1^Q8'0p*\, .RmLDFLQC R Z1/! kxyR>LM0U:rdʄז_T/'&ǶT{|S?'r+m#V}9l^ /)V}j+Kp^I/aF-em!*Yĸ) yHoEMN^H_h73$0vϩ[@x86`B75Z߃/Wc8卝YHᜟ޴lLzC6axCRʽ.՛dqA*ob~˶Pi3N nGM:s~sm7[#7iWOVXJ9Sf jyQLztoz6J׻ҫ\:}͕kl{ƸϸQO*#zV{ww ~Q}sMDPՔ#|?]T/( T&(>4oZTH+u|.>i#oÙQL 'ߌK7M |7Rf9ӭm j/e'3yCD6#Czr%4P:}0}e_'i,/o1.Z֗It~[-fFw>:W L%a~ oOj@lX{ăN=$DM&_QQO (2oGEG8),1EE5QAa! Sn7˜+x//xPE(xh <^(4X jѳzF ͝lAoK3H͍mH^w&/z}FA3[Ou[#i#fpm%@Á%agEujMgY'+m*{zj%N쉵6\xSW޿!TNj^҈&| Z`/{NRPքZ`bnk4I,Z:'náAZ:?VS2p0C}ݟ^M!h7Ś9k˭aeFcʠuo}[q]_wrn6 *Ug΀jf:W!x(̆p ߻?;< {TQL|w@xZ4Z/L䛦94k{+Q WYu/2.Ӧ+9b' GsK0ÛAsScZP}PaJ77Iu^|[a4&sѳ?TMtyGUѷHv6Ta/tmiUh.}ҝǯƏ5v.VAk/w$kye74sb<33Yecė=%D>}&:poo OK%DfMwlw&31s붅GQj-V˔!E.ԳV϶rHeNsd.V=E`Ykmf"2W f;_͚7N2 )mYrJTDdYMpcOu)kaL>V Oΰx'!6Xq3̈́JqMGEOC%VTHMLc1>"':x LgTWM;Bk/+=5DP*h;,e8L \L( !JбRK+\{ 1Ȧܝ9}b#xyBJB')2|˝ݹu؝H ʺb V*"S>Q25 D4V>[US9eGutL)k 8Xm;5Ah 5u\6CF QH@*}A;\m. :Fb.z[C-(Z-`YpEs6^))%ʨi)RI^SX6r 8 (rZGEdQ0Ap"V"D4/&]L@#>S]=uuJԬwOϘquyu>.#9R.մϚocT]b,{U Z,R+bm=6{neGsO^g 9/탲dh AHRVa S띱VcB^ˈiLcFs+Rhra8Һ هTkoUze&JGq Z{$TP1,LT8°  F`i/! b <* ?s~J$&$S`fOdVblr@?dΥ u)6O' Wۖ:j0}VAI[vz~dsry-Vŵrq7W{}?+ \K+`E 9g6Q8CSci$&V*M8s\f"P2zE)B(#'8>jK|ʪ9(-Tid,& |bXXlfl0# gM3^\'nÛT' Gȥ$7"sL4N S9[A%_e: !-'ZK(IItY$k;t$LEt!"'U风RGl;%桠vٱ%jz(Q["<: CmAUԣ(ւ* EZ6%r!^m9x* k|@N@8F{`Q)RLxEy!Tx/5?ED\xǁ{#r[0^"J{0IE4ڨew; @g($^2x3  ",)衃4arCQoj 8[:bd[\qqy30:NV+C@VH(YIm#.Yam;Wà=Vֳ˫tpCGzlҸVNne(Fa8k FoS Y^Ә="4n*0 hߵ|Uk  gU&UT5@ <]]yqut`3BU\` ?->ǒY-Vd1ZP5GE+-~lqmG+A 6C-0@ͯW "ePNPqvnT&U uJ|$TuxSմWM Vm+GV+Q0-/&)KKaԊaT*20s*CRWH/F]!Oz/u ]]!ԌTWaqS7xꬸVax9N^J& "O%?%| (fhќ,ߡ%ѧc[%b_{Vk0D݋F?V ЌGQ1cj M!@XkR`mϿNj(C1xqy 0"-5g`Rnڰ&lbY1]*wO$:6DHn{ yD<|k=ܤsUYyAP @5$c8TCj8t#T&`aًQW@|9 UJGuzp1qf'ʖ7IKw5GGÛeu]i{XU" ޾w8NRKGT(MΥ4!}j6kCsG{tҎ<͕Fu%fMߣs͑ǘ5LxQpD) " AT"Md Zkb zș\dTռs.F⻣!8<9r.t?rFß,炒yܞۻmoV[kۄawtg/v5'w\n8o'$ç)[ײ^;M&Au'U" ե:Gn8AIlZF =Ci MV!_ fW1Vz&w@i"ad([*CPH ucg78+ցI}qOy!.zpq3#Cgڎ\&ׇj;jχj;*8,*}vzMܘweomv{_竲+}$Ja,=֮&jB;ρꧾȼ1ތ7"h'ԂrSDI#>X6Y9I4,d^$>>&"eN2rT*KDMJs؁ʻPBjQ:$S~ۭ|L.|Orі h'Kzڢ/%֯2m1K Lw!Œ (7IaM-6P0}TACĨE# /7l<*Q1jA4u&]Y8NYsxnO6`}0ŢQ,`(ʈ3{3]nI+a -zdvS1p@0Q[- 0P "KC(q*A9iFF5N,1BԵ)vk@}pvJrs>ul 0ouO ywMӋIw*CJ|_E(___͛ߞQj0BBcT RY3F<-|}\'tyWG?W*gM~8sIeUQJ1b{N3ϊ:hgrחU2P*7K3kf؞<+'VniU₊沎T7yZtu4Ypu-[E*=Pyځ_Wz5_D۔nH.Ϳ%Kߐv,7.`X`CkF8d5ݙGK?=MحYfv=#1v./p._UϮO>^LciǿꥯCٗiYo ЇɓKy} K>iLpEΊcmHpq_8O/\K| Zs#BTE4߻geWpk)gkuڒpݮv]%i])]"T:얁kT-zDzY^1m-fuq;c}S+hlƁ*%ͤ9h^ORS^,Lv|$4&m}ڡt@<^O[Uiw#y5f*2q3mjĵa^WU>q_i"hؚwݾl C옌;B5#׮.4=c'E4IdYI<Z;gE62 :f(^Fٸ3<}}H R (Fі75D8 ,JQL! 
) !Q@(Pe-u1F7C ;~[ƭ-tz՝{O.-m6Pv 1 .ozkBRpNL1TuT.R4>xa1tZJZa1Nzœ% 5:$IU ?HdEڜ,,k hF18xb콏HӅ#MiQYK+z3*PLfI C#cHǭ0Ԃsl5K(m$XJ% "&|GA(aIۨL$ByehTI&0@1x܀2.4*v`&&' ݊HY?{* n'ClL&L7W}%whl[UNPAH[Ǐ|<@DE.E 23|4+o2J[e]b N~.=vxAN cM \VyY8G#,Jr5ϱ_n(v ׁyCZڅ'U|Hbp9kB/a+.}S_ӏٗiBcw㶻׭x9)ӶE5 R#B[?Y||x4X a`3A=Bzq:?& F 1:,pХ4 ،GaqIx>'u݇Z CòhbvI&NEc剢zP8#2Υ *$K1=͂,BFL{ V@ X#|Y,y[R&=zأ+m:=[nbթsxbkdi 6)j[ =SD=5a2u6kM6T*`LH$\=ql \j  kŒ9]Zb.\ִE–C}`L¡ı4y2f#лRp¨ ůMUr$=Uo{cVɲޱ4rTX-"yTC:;Qp,6":z96`JPUDf#ގۂ?Sپ{RK]ɡ V+ܾ{{v%N栈ɋ `o7-_ίB{?9_8EcTbwA5@ld]I]CpbN]bƢvo\sHڹ8jz'IM^ HZldf8Liq(d,Xٛ@_クxeI ߝ׋u>_|C9)!q$%rһ:6X'i5}; C:Kq 1H%&ǚ:q#njʳu2a.gzKb>Ԟ3sQۙg~f6Ff_]v0TBٶ\{%{5=9Nb rKfh *5L\#ca)ڃ Qݒ2͜pW 0>PD>#3"-"PɅ.([6j"5\"yKgN&P|Y:ޜٿ{.﷒ y +W(- @m8oz6Hkrmu@uWx/X^+^v/{{s}Ih?ΐݿ4]nۧߟ~YZ{5}IVzsk6] /ˢl\娠0!F-U=sw0/w7{i40 X, G'̓wNv7WNxfuo=ˣWoL~;?-`.p5PW;SV%G쇍}#DѻogG{//Cw yh,Ud#JgDQeMUßȇbu@!l_H٘77O{l{[9F sk| 5 KDfƠ[(w_CYlV<᭜=.&97ưRYv nuj\j ^d KN.*+&x!spt@f? &>X@,V TmZ#C@j ahg;!"g#qjM\jj08(pPR @4&JA49uT&FXi BРIsL;K0%p4GQF[5hg'4^Xǡndq"p iqd /2b$f=a.Pcj>6R hd@g?u;/^{ .3fɃ (JCZ!`#8>ÂvL'Z҂7|::FEw zܦgj!0pxu`u0-pYIɞ$P*Q{@:T%Ã($O(v5!M5 .b$ȃE@(Bd),|HrqfT:^7h.HP$8eV`ǣVP܎m@SK U2YЩ֏L>ٍ`Fȥ6$5tY anch6ߞ%wu6#!RmRGW}cuOʘ zإMRvg}'Ћ:"ʤ Z F^|a܌Bc,9;Jr[9@vUQ:(t  *kI 4H[03A[#hm4^Ù!IH-!nOƢ8TQ!k%VP?zPDM&BQ N:{P.Oa9ۍǂp"F'1ȉT֐/`c3S= wi$F̪ m{*uƣ75 W*">i,%T@ x ?\UU`(h0J˺ c|ǛnqfyL{~|vڮ17e"aNVQ .n%ZGkģ`$0ߏ4[L-e nƳVl]q.%KGU@l`/G#0N3 ԤDWT*#H`[ H HrA=<r֙{ P}saq v8hIf햠ǖ⪐#w(S'ysrŰahLR , #=F:jSDb7~:s uPS\U8ih pNBs>E+2Zgj dƚZ Yl]LZBdGCM(Z}%FqAd=E;vٍ)Se4K<_**wNzHm3*S-"Z3gGܛ;4%”-?oEot.3V4 e&y`70\Qۂk<ͦY <0m!g!sK>*MƶcytqǺCAǽUlM}Ku /]d ݬbByiҡu2#Z*TzKMM;xpIfZ; yYpL(D)L`x C(UB\Kl`Sn?`5"+u˲W7,o?I|$}d]%G~Y׸/70Y>*) 'M&aꊿ˿E*peKHu]A3 !I']_40K XA.Q9ey1Cvq{&]dq_.pga1SE>Y`,Yuq5d 4d/AT{AwAV"ɒ*~/?/0H}EOOj/tuu9^.*U9v{aWVw w0OGaM(.Yꤹwz'?|7r`1^U{7MJhۥőTH7Y5o~t'NJۈQ f{m;0X j-5xqmoH\9묒rIcx1R#i0 X48dP&.O nmmV C^{X)%}UD}y=t 9q+ 4_If7GQ[χVyHU ԷM{Uqz$/??߿=?~<;x.𤱸`x'6xp?[fNpDV }mUp?ݻ`,nڅ87 s?9`yJ›@`&-_<F{HY\yThCE`0-aZ K``9&ӨwBY"Rҋ 3֣).8~{:6yGrs_d,D^XJ)+냋#ꪽўN+{:8>n|ؾ u*ý,kb6=egɋ {Fr򚰊(ǡ$KLd$%(DI&J2QLd$%(DI&J2QLd$%(DI&J2QLd$%(DI&J2QLd$%(DI&J2QLd$%(DI&J2Q 1]$C܀)AIs %)`%QϓUuJl)zuSB//gk0S}]C?Vd4-Г}a5y"_Z9іcv~#wGFzԮ:eTZic|MnprfqCQ?m8BV/nEͥ]Ӭ?_nbI_GZ M \OX/dHnC?߽/gP >Uh:dvFBk@/wzĦʼfW$cr̶fM 1 aVԍE*kY+{Iܲ5zŪR~w%{6(L\}GRkc';)IN vR`';)IN vR`';)IN vR`';)IN vR`';)IN vR`';)IN vR`F; ]QYwG*;X+JcH~ vx=}e\y XBy:f\b&Sp)ן\{8ݓ<370c:d4CUY. y_I/!mp_r#_}cCKA^6^^uE^}ثpb6E'[Au]Dl>V697^|]uc"e(xB -wxAo` 1h V4-]I1\/y|3h>+Ů%ķ8NQINp $'A8 INp $'A8 INp $'A8 INp $'A8 INp $'A8 INp $'A8g/x)xz[%aZx;XqCRm@0./`Cs u`gD/ްR b\WJߴ(T&<ͳ\ \eļx ¾¾4¾{H#dQ yTf:YA 6Y&n9B0Bm6Ǵ(o͜^0*=Zt =sEI3oН' JGkD~6 )tyPDr[^3W2+ >q'Tw/=a9-.;-G֊ hrB(e3!\`QqYCP$}ȜM+np]1jƞEbklpag? 
3p85aʦptb] w44,a^2)^!CQ.r%+ls}.%cxH[#T>) vGpzGpklkSq< &x}_|̞Xƪܸ .[5W-Wowe!Y&tPlq}ѾNlrCNQ'Y'Lrm,eW}Q|j!s,`u!-W Np!RY*)sD3G< ȊGyK+,( % d\p!dBi<4wi1l͜37M/AoS,^l:wgOmBw /9kEoC O-EOg݋ 'ujtF>jx> Z3 /K`J6o)\]N/o)t~L?ޚupe W޺{5o/|zh|?`򷹼٧/,KLc{_·<]Agg^15g՟]}ɯ7'!]%s rPKG-v|.Tg碹3ok5g~>\]zr\LSFu]ʹSwW`D+]rW`л\ͻZs +㌗C vkg댻Bkɻ+rIM|ܕ\ţ_Zp[>\"yOrZWW륵zao<]P WZ^ O&Ù~GW5KN5]O89ww peL!UR,O5%se7F `+qX<\Spq$ ,F 18zT w `1Mfԝ9JԧKk?lJ~zt4[ո}wNtg1_ }8Ku_EIr'ϒW j.Jb$*oDQ,T%m@'7N ٯ^M?{KK]h]._[}rC8YPemo]Iizn-3P7]85blb 5jJH4ci]'o=: FtzJX]2\N]θ+4rזZ̩+Ґ:Gw%iUNRdJ+6jUuNwMwg͕IwhvMHfG^v] Ɠ^X71|HPSQJNF~G~^[=Epwc|a5y"TIVjD*ӯyRq \C"o]yVn*s/Y*Sy(S"MqB;86tٯNJAi^S,Dl]6ᄎ0g<\ ̧=8]-Dn;:ug20d4hѠJSFs½:SXg۝3`W'V.]u]`Bs|wV~#+hV)Jsbrw%]p+0X1w;ZN]R:Cw%$^v]Rw'Bs銻BkZ`:w%|vG 6݉\˻ZgN]])됻R2:\ѝd+f]iżfrWZqUgѷ<]idXi%wu(mxv`;\ۙ+SwW`+˅[:pb?2pg0/3ZnQov{n ٻ6rcWTA(Uj9Ӽ كW(N{3CO0ز$8cۂsV%g+ 3tUbP*0=]AښT*>vǺ*hh;]tJi]`tg*h J[+ J;tUv Z#NW=]ARr.:DW0v'Np]V޺*(UztU2LLA;XW誠U J-{zte\զu sXlK>"W߸f]1X琇zm{&VߏӑEP.9])x ^zSEkذ \LZm Jޓ{Zf:DWlU+;W*v*(u'te-(%* ]> >%}CWjϡWق,㩭0k&tz:v蹵ũ|,v Zmr]I&TũkXW誠Ev" ].YW3tEpO^0z瞮])f5+ ;ԡ*h}PիЕL.YWt(NpQAWZmHWK#Y63tUbgֶ޺"lOWoԾ lSr&?06o;5r&WJDͷLũ]D׫׬h _%HncQyånz_svÊb;ش3{!yuk7 W??>ec֕/OnG8* %QỌrCisCOy/&Mh.Vʼ?VevY8֢2?-v:˷_yDknp/*ś>*sا2՟nwKG-?~ܤյ_Cj :a?%q%h+}uy[]V=ɠ_^\$Ι щ;f lSVS =YX|&Zuۍ/6-MvpTt{1-$|J_\JZe =rK.} {hM2Jg ӞN2.9ܚxJ٫&ˑ,>Εf [U>ਿz3iK;)ϟc^..tjŭ\%O !Y a^TԊL공Goo7/'LίzHW6܊s }VV;E"TTL\6̎dmtp++. CŸ6ځ5 slZQ0csMmH*/]/fJ@4FH%k7ahXQϨ&r+?}ߩ1Ja[{=9I_ƴ8OúU2 f@Οݞd`!rr=lBOo1?YQwY}з6vҌK~g(+)h8`(G%sPW8Å7#UNh5cC54b̒ 021[/$23)j\I5͌͌*qacq,چ{.|Q.\!*6)1;n0N7y `:;+y "'#E>g:gi2Z@lDQƖ(3k(ΞI66Ad,&:Y00c7g7c(maƤHY^׳vonx̱7Gs`bdul n>mL1bl #6&2"oyψ=#nxTID2*PӤ1y(FTC&Rff9c\R p Нg|$XtH@#ɝPl '^V jS1)9eü(z^yq;>ҤΥkry1IfQ8w6 $R'f=//cC8_Vig5z 2Ղ90Y{nty-;B*>a8zw']~yB8ЫxyH n8,ٔw_ǟg;_k;қ~$]Eac5OG)A)r VzpyK㋥ {ڜWٶ0BGaٜԘdjCQN$aęK%0{.#[=cu"LQoށ|+2YVId}B?BsxpHN%MCGqhrٱ(w2Ȅ|Zl:)͸'=}^M]bMeߴc*7,qIq^8k@dʐ'M/ y8G!GfS1T=CSQ6ˍ-?zvIgҭངb K$h3Jɹϙ3DzFH-$ȉ#wDaz[&u"Ӽ0cJI2gez=֫?Ղ:MPk^*%uQ4AEd@dd>*BL*Ф ( =x/MOVC iYb5yGBGw&읐[,Acoj5fjٰz?iFXD:&d#Ͱ4_  dEh-{pG31M,6&,>dCeﶀZu6ʓz̙4w+H2G5^5H0׶""fۊ5{) L\1\тXIcd3eZy+uI%r}zhN=VXv37&!jmX;[ Lps25&2'(˥3QjC,d%pyN.ubW$BYPj|Gt!o<7n9Nô<q|HPy@c?6qR`NPΠ\A`* U7 I;lq{Pon0::˳nm6].xlc^ii9F5&򱘂T!:eFT6@L9sYF˸VZ42Ǎ&t{s<续Wid0zRxt4:QbTuggM/>S/dGF?`|=?8rΑ7mz²QQ5<n}e~D2y^̂ 0f]ٕH9sQl>ksLOJ%4nK𫱺;H9{U9803v|[CgPN)J}bv)ɂX)zLl-fiKR ȲRt_cm3d `Kaa2j EbC(hUҲaTko5w>uo%[y"͋E}+*dOI4D+؊+=MD6JʲY]Ys%. LoyPǂVÈ >Wb{$"灁hMG$~ ڨF:Z幉I8%6K v?@@Ys16NZ[h׃{Lu"KK qKf="ڏlr?W GWYr-1.TFX #3+o(`{"zb[ѳ A >ss$K)efKc`%6}Nȱq9~`d˦u!UtGp ,E-sӇ6gwDomm<|֞WT8|J5J쵃_,J,ӓ=Roa-ꛐhc79D+pjUiiv/])*AGo؄?$KL{.}6waN Z 8?iz׀˒]0*PP"eʇTRʘ. EM;vc2Y/'gT3XHWi ܲtR*4{)%bVY Cܦ=pcp[1M1Z̑tVy]΃l8=d6^[[ivG׋Yx+r+iYB]1w]zsAB.6eZWԺ~Һ=tG6?6eE-evƻ;_Gk-d<將gG/H=x>xmkO8h.tcTۼf%|ݯ3Wi|sq||;Z gqnЍReF5@YG:]E/DI/櫬Tz[ S{Gai{Oℚ,D)|꒗F&B1_Y w4Tq6[9'D¥aqMZc?!)Qe$֨ʖH }@_ 5,aa`\,NGb9?i>\>!l`7ٛCE??_lu?.C 4{IK9 @c4&g{UL9e&d99"MLIY,Tb }X)HV-^n S"{}l;|5q6+%\L9VsT(sю%LIQm`Ï t-aSbVL]LsލtuKܕknYuE[^Rj&γd}AIKҀnoeO>c7۟ki~"*&"5H)Ld=$(ɒ\h#KxHNUT c)f9 4Q h eD zaDDH(9 yu0mH v#~sN{~.UWe3(%n~eBOʷ&;z8io{G%Rdd*_^nԥ&U糡 A~Кsܛ%a'egR8(2LrIqp[#%Ѥ?Y|>LU0fU&}d GLФ~p7}ُ'IA897ǗǑOz7O;[yx5^m%ߪc%^ZQ_Z=%6\:qJO`TPwmLֽYF/-gPbz:20%adR3g{+ս] Gß.u4ӛS?ԓjI=5u#Q\4vߏƱ\ Co{\W^ˋ/-S}Y){?]zOb9(X+[2KecYWztFY TݖΆ>))~7o凓oߝp'o}s״˔,)>Z'c槻w-t[]S%;t-ykrCW0iRmB@ɥi K&>Mo Խ ^ku4q/ğ6CpWdIIhY>A`5iKk}̹8`؎#=f¸T| gp 6}3P"=@nKYXkzϖ5\)OϘJJFH>Kxf1mP9PK&L6zijpt\;s2dq@;A] ɲKgdVrV2y\xL1Ey(E1 6lK@r>.i_z.":H#1ѐfNY@b Qo1Wmih -RnM/H!3DeH0.7ˆ0 XlzsbfDd^[qrK\adzO }v:q-_yN)W@lrcp! 
) G8dټLfd@Ūӱұ1rvo r3fںLz3ls;q;0^>7* biʕRLWxVL)J.:?.pAaZild,L4gEdര8I#[  ݃i<_ e67ÌV{= ѩ-dMdzZkJb""&^v?B?>e,Vvn~m`<龄,=_pLɀDݝbFN'%$Qy.r'"R"[,“X2Ӹ3 a؂a@˾+v!jF2JI%˃e>X&cuD2ylR% vwҗH?NͪZV_be $OyϓKmo ȣdwaJы$ :sdDSUq[J>hK--xz:hz:Y Ee0#,y L+3 mɊ'i}kZ{~Ụ^ :{Wp(`hPEm3~, 5,B0En_>&_tIpII\08Ԓ\2'$MgoRtFj4, O=~]41ovc\Qzye‚kT \?K\b\2j jw{Z%ﭑYEmWL m~:ų'4e/"^/ӓ Vm-}p.܍p4=[#Z\3'ݬ ݀tؙۙ?}2-x?U &噏?iwo{941N7E<\wHͿ'5yrIHÎ{ aD4dָʊ>9r3O x5:ǍgZo?85<%kOz__Kë.;m,DZbXLk.^*b,P7rr k͚%%o.n(g޼255mk'Mޖc7Sc?9q4:$wh˅܁zMOs\h,x^-ҼMu1(hھL;~Ni]H4WfɆk^ot#]Lh@O,*􇋨4oyz_oIwS0Mԗ(}?hE0ﰾĽN7KE6vL-/EX_x¹;;:<$e,! N)DġNeNw{\ꠓNig~[H}H-ǵR '!~btH oKZ'0Fj\N*G2I 4'>vFWf`z֯w qm y }9Vyw &o|7Rq{?K\(m% ؟E\)%aDV]OQ4%x #=2WEॹ檈+$usUԶ3W/\Iɕ't%7sU5{v\fsg^RpOI`ޘ"ubHZ"%\@s푹*sEg%ۏ2WEZaw\) 4WFi B?{gFBjT* ;EH&7bQ_m+#[ZI"=!i͖hk؃mQMSyOwuW? j D-MW)⊵}n]~@uϋoBy=|֕Uu /n7W:Bsgϕ]^Ox޼lZ/A&c]B@>V6K(zu@"}޺zg-NP^ɼh,٫N^PrOɦȭzu~`A*] n-wgkrlͳhw?]pek}8>jiezW~8xx=0 1(Q{18ȫƩu8VmJXeFX0[V4w+Fz=֭*TNVWO+"W}+Z+QKag\ |G`\c?vZg+Quߌit+vw+zԲwST3NW6j3np%;ӨnAQkq%*ivW+o{•jͩvQ&ePTW+>z3ݽUs!{UFD%6W)r6c*Ĵ~k)M\۹j om5e7Ƅܚͮc#Bm=`َ^{eiBܠY jW!-T4h*F Dn(\ZyrQ%\=>)v}Yo\2WT=.UqСDOcq% (?u\Jf\ ^G G/=Tɻ+0Ipڞܕ> ݕts1x"GD=+̡\~AuJMWrjAq$r9V&v+Kݸ+Qq%*y){MW"#w%rWPkԏǕz u[-'7s.A%v7X!t?gD@Z_`[!}B옧^U Nzd(ONuQ;)*?O1F WD"wHɟ|뷃+L罆ԙb |l\;.Fʵǽ@?RvrbWH폫32}t#\fV+Qkxf\ LP6؎p|7L jݑ.ЏTifwubDU?Š5ݸ+Q&+Q9f\= (BG w+f긂J㌫ĕ:WLj{q{ӌLu+v\\]ZAj.OW?('ws ͢Մ`Ĉidu'me?/5' [0nytM[F2ʹ)o5.d&sN{kaJN{_r9RA-/v^p\TLWgr+~x\J;J9L`ࠎs\s5Ժ#Brjw3:Z[ Q7+Qi긂JlW9DJԆ *Wh5=*ZkT?\ j<=w7]p~^WhG11/RZ  M[]޾Nwŋgx&(_竟 1FPM>(Տǚ{" ]`@~tyyU]ʗ}#7pPcux~ݤ bqFe?eȳxkuJש\/{|q!~{ ܰ|2g}:{f+̓{l/֭Y!ߍ| .>S߱ d/('GQŬFޫr|-;f48xX]̓_wB0燋h6#;m~?6*K/tۖ퇫r%d r*slPBI. M`kr}Mgun!i_t_k}on#wdn}i//nnZ}D_hfy_ɥ͐tQ*H- X嘜5}v:)Qy5\P3GS&U5fB*Ue28N9WS5ŁUT;vuv9> 4X_j%n1&X,rLT֜V ڜbv2!rk5D *ckds-6hF$ ^i\c 37% bfEʧSuAg҃A 1Ȏ ўH\Kz@ގ L#mTbˌpvG))8#(<%;t< xiptqg(* 'Jr.|U_Ţ˳e ǚژ[їK,&ʲеYLf\s |m QE@P{ZvUٰ!g%w`UhS%5 E[ElJC2cCOV%Zep* JiRTL  G XG!hTPk@oJ'= i2 VX!rdS2:zbiu%fX57]\? e º3ߺՉC0ȂȄv8 !dԭ9`AwŜGA' sG`e&݌ P#.%JC%;szN`n d&_`bUڛj((Sѝ)G\`#)#tО%ŀwD"=dPiWm2 R6֭+U؉.g->))"F_R}Z f1r 9*V6`!ѿ)yA!pPBYsH Y&1M41# ChAvԙ_ r#/W6j{1#.UUQΘ ,jpF5%wN52  lZ .k$ SŚc5&ϛg|2di*90",'Xk|lD0Hc c.56h˰sP6DR\7D |qaP#̐5Ҙ n bf anKPv%)%ޱ", !+P(ppĒA=KH#L5^PD |d!bm B砐*ɾˀsmHW.#A8d `68`mX)RI9!f"e㈴FBTzW A`eP|wpo xDh+Y‚!ܠfs7,0p# ^8X` gu'R% ֚MN+?a9ڲМm,xJ <TY5\㣷 *,^7[$Lz2EynI C:qiu0qAs+ Nq]~-< }L;ϦJLƤYC][hL7h!$Ta3`*`( XYu.Ь! |0o8Ò 6v1&Z~753|n0)VM&9a5(]0CK6ƹ'A'%r) ]&X *A ' rc`; 3QA+d0>6* W'&\1[9iXu6(Э a PnQÃRŢ4Qk`b5TRXи yhڰ(RVЏ'#`sJXZ#enH >֠Xp|r\|fR,J1SD2a$ƀZ`5:?kr#5(}i2VuxsZ *Zx `93#àZ ^4=ЕAs!>?`S0F[}|ax>)d(0 4qԭckx * < ks :fC1IUQv("L0.cdԬkk%<:D \,9 S"* &puA/?5 H!ީq1cԱ> à6u(n](`nu':x^cDfCmA-:8g/_n^ay5Xk{pk&@˥XQ4L.dT!)L&]Uh \U@`0o,fUƨy6_@I|t|̅=b<XaXlxKZL]qa3]3? 
l3tn}ʱeQ]uB 7t2W +#[Lfr?H`p@(;R„%;q@Dq@Dq@Dq@Dq@Dq@Dq@Dq@Dq@Dq@Dq@Dq@}5uHI޶;ĕ`8 ˽t|ٳ,S8  "8  "8  "8  "8  "8  "8  "8  "8  "8  "8  "8 „+d4uJ@\i{=KkCq@Dq@Dq@Dq@Dq@Dq@Dq@Dq@Dq@Dq@Dq@Dq@ϖR8`Bis@f1:Yr@JBsr "8  "8  "8  "8  "8  "8  "8  "8  "8  "8  "8  z.z$?'W/jnfh}N w*%ϻ*͖' RZ<$lIj`K -ah{l 􎰥-}`{{HJ+06+CQW(S %g3TWYoH]?j+CQW {PJ#I]=Cuenl8o8 6'Z<ܡr!,0fGzw50vH2So٢4CkNs蚋jfG .G HSA`Ǫq?'U~4J ~Fj8mxev{5gc|Sv9GP>k+G@l .}zl9!p >+q9o3Q=~j`ƀ[.WiG \m57>?wq~(|1y/r2R'RԵ!Ge|tx]<`OCY?`'JyZCRW fPZi]]y}9Jܳ1!{.ںzBUWV~`zfԕxruc=ŮZa/}6;WU+T>ԩ ׷E\oars|H 홠|;<^d棛7|?^e}`Yat{s$掷t:;YYXGP.y#|2ccr uP6`IżNPqb٭ KƱv qL@Qvޖ*׋t_g~6%B>~Ba2%)#$ZV|zY9zlWy# ՒK[е|Wmq=agyGru$]p2F`!bTn5XP.+!,W?ʸfO, >t0}=|>Ӈ ƴyViak۷?]N`#?bpB/gY{s?U9%(8AM3tBίjyQV_y|1t |x蓾]s%7Y!%SM20w 3-$6}nй2Df`=W 4Sgr!%Ͼ"&*H"bVVȘam΂`)&GumLٛ\;'~Iyۧ9\`c I$C1R^& &[.&d,+c{4.J] bulVGܸ#0Sڔڬ|oO|]wA,}޷{9W<)F{#~NJXnYYT(/PT@ہ|;p)+yvOSE3nHpZ)~V %L/ؾc_w ;ZbPڸ|̍>i/kv[䤵SNwF,0fEbkax-wdr% Rr]!+08w[V+}1փͧMGj]ɍjò4_m~ݟمY n[2\C%][ZZK PEaE_GMRZ:kE~ `!n5n ;\:z"Հ Aq Ai[ ya5HrZoT޶gd[0NM̓\fRRe0UA{qc*xy~8VaTZ͟FɶG׭:Ϸ w ɤ 3c 3Zܓ%ju h4>TEpLs$1$LvQ'W=Ε_͏8.x<_㕗]o-tʥGgJa;+fDQL{e|x2O <-5IMcM(E#X1)R Oeߣ^He ( r-,L- O;S%x&N 8 |@089{7Ƿ K/`ɧӻdVwپ]"oR-^:s1e+>OE\aa~(x "Kn#R8ٵ0CxQK WJ(B1 6 J粒\Ko“ ){(! e8OJuT~=8f<Ows?F>^TXvٺ7vx%c2MǓտK墌^Ǘ*EToq^/ 'ZYLS MXK~[ik^FJNAb%7Ws[8G=! ;~Ujͱڃ*\y]$h7^#Uyn~fzv3wD[_vMwyW8_n~|g :+|KW=B( }6̻amO‡7Zm7lxvfcbDŽv#Էjlow/ I/\q'RGu3|""z7@H3P}Qڎ-yIOjоW8ll°24>|"B>OTtRY=a> kp:5w\sU{п*E.g|~ڻL,3M6 HSb0Ʃ lKh|..rg%Km)ᘋRrT\]mo#9r+| pmH `d-e dx)^,{-K#Scy3bmڶǏ~5H˒Oj%S#łɄ0,d,f+m5xʦ柍?br$) pJװufjѤZTuV٬O2̰s<vݡl4]7B\㍐]KwWLE5GtO)kQE6/١Y2 \vbIǮCm8w?%ȍob :(h-׉~Tu}zT6u/ pرi8uD3{ Sr_?C'FmߙMք䃦(4Y`UFaR0bBX3ۈ8L|S+\tdw`IcFVYk0/Wʩ e4!X%#KIC g@'s(G_V^9/yI:sاlqz^j=^qsMyM_r'*3ۀКOwTⲣq25 5PJu#/8&jMizv]3,H=;.!RubԐrʤA˄3J1v@V:TlgAԳJ+dA; }]ЖH ژ^7ֳfRzPEg uYAIK*~>@)$A"PELl@p)Xhߊ v_~QC." ّ1bp$J JGㅱuj5Z4x{ȲH(1ʈQX(dG hc8*w^3eJ {ތڮO mq&2*<K`}Q2h#L,cAI4A{yA̛AڭZ&w5Y+9LDMP 0c6$TB꤭bxhmIdYc 4s3 U;|c"ͤ:V qI7Y8Hҵ(=q(!䕂M4',^ [E^Q c:4ߝEꭒ E}Wь*<$|}*f#l![zlt3\NF[+XN..o]&ui׳[&ٲIJ8kQؤ@2׈]KE'/5y%ŗ_ZaXC}Y<"V>ΊBx4v~bf(FIm[_gƗkWSJܗ/?~-wZnxMm>wWo^ͪ%Q.gŲBc`b*x;Ț?YʀT)H)PHKZh颋pRCR(u@_IbM0J~I$1IL$%rNj87v#:xM+(\k]>قd]s|hԒ~!1^}%Z7WZ'י?)`Lש|[;g_x,@mOɏa_7?Kxy}[tZ:l(}hbexϱGV2X^px?zddi~?^0AΛ>A&y9-Z:"{m[:4~!f4d~~3fqŋZ\j;ӛdf|O~sr~zsj9}Lt|4bi]\/zu}yvz4+vN UnJ焻Xӿ XvTӓDhm:FɅe񊆺p{bZkcc|%:w9Ru ԱU|ݎ=Q7?/ @Eʁ) r(:9A1#Wު*Yb9yE/ZWU~lCGuďZp K-Zl9@;Ҋ j= t[]^?|.j* y#bJr]76S2ѢvR(ʠ YQ!Ei85Bšخ+)y DLWAU7|jJ]},U?$#`L/01>j4H%0*R"9-QxVKS"PJi2˘* I!3'Ձh}^p6k8חӲڛկ{~Sр9Ŭz^10Wbޠ?5`%3SW]gU80ǁ9*s%e`?^ m# 4f} ^}8~WcO0#gwL-W_q8(}OϢYzYk̨$]TRJM.y,RJrI Pyic5ZYH.J9V% +qlV%꺸0rP+Z=^ioP-ZƦ.&oZukf!gveய]0sׅVmeFuB Y;EwUp0 T9R})<XJZ%51œBX@ "Ddж| 2s2TDPk bQkmx/2Ó%yuVْX.@*V<=^Zn|QE59K$ 6QWZQLE3cؠs+,#Om"lV]*C !NJ+BT p`7ƒܗc/P5RYjnUPPf``F/xfТ:PP] ^f Ur7Aڿkj=QZAML &%0Y>$ Z$H$)àHmi"Qa.EeK*13l*g#be3PFbvl<]P]y :{wga'@l*ГڕVsєF;9VL5u.5-{xW(<҆1Q9H9f83=‘D&uAF(YTGWY&$wG)Gwkъ0{1dm4~u87臛x"~et|kZ/\G#Gοݰ8cQ_Fiz4k^~꯳+?fz6U`X>lnەőri'ㇳ0^NEXw'X'0OۨfUX+mȶBƭ~_QIg%~͘),Z??/;C<'q~sIkjHafWZPFy5_gDV|"Gۨ#,"^q:gH^U~7ޞbN_￁~/P/#`W@/ئ n%݋O]% M4ŻIW2+FQ1ϯ;9(! @ gr*ύLhCj^92U­p+K?0q8zL2RGq-VN 7+v 8qпԒDptҌol!t^6$[V A$8&%Id4ĀJ=wwF@sJpsy~(fv(:pp\o`ꃕz7vp)y}vhz\QPpDAȑ DYJeYޯ-]hgf:t]6|[B'.p_E!ϊtpUsn<_Oťex{rMwӛ&0E~aAmH׭›[hޜtHƔ׆\BW mtPJ)*6tJQUBeCWHWBrD*q.UBE*$ ]"]IU})V\|h-L'MVe[/JVR)o8;iQ[lȒ{khz?"X} R.w*_=_`&n̼U@qҘpt2VJh? 
!,MktE׬zz<+0s#훮6LW%r?tJZ1gn@WV=VJpR#J5 ]\d]*%tP6tutE4ֈ`kCWW`^JhiUByCWHWT3x sjCW .EuU^]%\4tutŴЬNtKZJpUm2M8]%4tutu]%{_\Uuh5<]%\5tut%0`*VW`BItCWHWK.+0ncFX(1\A|`*a4a Pv/_aխhլj:4[^^Ő9\(fK)8 MJ+oP˼o8_GҦϏl:i0D0fbN5t> 4qp*r NuEQ+$HXF$ }'W]$9ҒՈ4TgbO;+@:]igCW_ ]5#LXmX{bFpmnv_7C*tzhc,5f ]%\ׅ-}놮VI]%׆\BW bWԬ5+y}U+Y]*}UՓ#inKWXcYJp]fh5ms3C+NEFt3VJp9 ]%UJ!ҕ W`8 u+@KPcW %F ] ]IB՝eI 6m,U:Ik U77e ttmh:ψhB++*J 1$M )4;eLFx [sOŸ7Y-kjEmDF5jR[xy4 0h}+ZY) Gsi=ٖ`Y49\UJhug!(UCW_ ]5#śƈ{=OI f(UV 7tઇ~4LN8^gx| ,fU3:{~tri'p_y^N[%Y}j8 2& .S{n7 N1$kDJ3+YmfIy߉0Mt]q7?oLEHurp{?#T̛(9Dqƥ#ttqaV) "sg"1 ަ59^Ŭ nvv]Oy#4۳waζzفvҌ \޺$ܪJuCMPӲv>WP!FM,r~tjUgtz10óiܧclcl4.-rRⓄfg泄W5^!rZ]I>@Lҧ@8 vx83qnœw梓SP],{ݢ`F\Kv34vF͓&7_m"\Լ/^L[}pgtI2]{jxU Qgt}<=Wh:ht1l`"Z2|ԿFsK3hW]x r6Ͽ%<1塔eP>D]2a&o[(-q])h?vI^7d]E,Da2E,o!4F]LWmYHL蕒0>-}"Za2A'ȜPp+b ap(2 %JJiDSB1MqQ^mt4D{S(Ay̝vX-"ro0CJf/h\ \& ^^JᆘBZj0ŅEw}nFSTBfnO_J#>~_3Pwi*U|*/ec"1tǘ;ޙ*H=%F5 vTe=UDi4fWzu/Ǒ=VI=8lؔ6 qŠqÇk_. F,B0#BXYL?# q0+\c"U\!EY(U-Ėkȩbv7 r+lgv쵲2aDDaQzw5u[LRss=ϴ2V.]/ 'r\ޫjǓ(03](0Xvx sZdD)1lf%B_mnܘmf5߶sGVs :#X+PRѨyͅ1ǼٴM?Yr QmGs+:2,"1rV\$JCFܯRnPbaY3,➐4<ܜ jK;S`N:vZ 6gO҇hu%TfQWD]u%U}QWB)iua&#L:7 9+r;wn\6]t> #2GKe2͑IxaΔ)Q9SBsd%LIIфSBQIZ@bQ[Nfjx!Sr gGugS Šs9q;yf3xSJhqX̌Gg3>=/W_vtL̨S^9ז*\4HהDŅLc޵q,2ٳH}0qyZ)Ebouϐ)II#)8驪. R O a)g[ŝ2(eZ3$"﵌FMPHVHKD˸5rvkm>] &,vfsl;Ƴ;t8,n OE}&\/[Wݻ 'O)LF)i+&]^߫\LZ:?j]V7wotf%ͳ;%,knݽM{E`;|f~H_s7uϼ σij|Kǵ-z@G^^Ϛǟ5ݾOV뻅6rǴ}'c 6"\h?mHVy+=dK!sق4cdqD-#Y=3`?z^XDsS[w}=PK*#! 0W-4AE❔Q[KU{,œ}s_~;|gף,b;M`Gխ 1Zp3"ا|L}V HnȈb΂wDðqmG#Fv!tkTskF9hڿl0Mhu1ELl.,Bn9خOdj0TL |++eZEZ@li%cm0r:>Gt]{P+DKB[ߍv0dXRl ^"LXP:mmd&tbЗYH"dX (W$6|%S&aJ;f2Rʃp-a`5eL9;sm/P4񨺻]ym3Qya櫚MNݎ&>QkYLaљwȉ :XsD(8H !$3PLJ:0ąA[dbÅl|yGHFǤ"90ZwZI%HD0 AP$@2* n+R"T)ǥ]/rp+7882`#5KjVRSBqAGib**Œẖ$ORҁkii?GN =Nߺ^qe }}0SxIo;|%MQ"MZ 4>\)vj֜Ӣ*>>;/Ӟ8ȎC m ShIA)IGf<wu48 =]f@06`M@ZX'UXO grLdםA?*q`|)Cޝ:~J$(y>TPH^_*BqNBGnV0Guĵbdh։TptQU%IyZ,=Sݺ5~7^L'1>Tsb._~uߝWsKbnmA$g4<97B9FHy$׏t4 ia8ufyRC`ŴDcv/ HyC6Xbq3Is`\C1ERWjmUji]ur?Mz/cCk<πveX94Vdg]yʍ<+\yΓĐTOPl &cѰH7%2@ʴ]jY"Mᇰ$gU5VlϷ} r(n(nMϬ}RzTO=Zz)k-@9KDlhi)1H+hP>P?Vv! b%3x7GIPF5Q !É7C[+nGs׃faKs弒BfmܢcǸㅧ"BFP!< IIb`M01ҧk)Xz\pNiV#{Gۗ w\?M>kFBQje]tۃ}D \xQ(|aG(Ǝ)(Sy'߻=C!s=I;>”p"؈ #$p^[Jm8DtZQqDo[owj\PWGdTU Yz"JFc&=jނcRygϯ]Ɣ@tHjLT BcP^SG|@4 DmXdldl{YzS/mGoh;jV0QzJMpX,Qd ҂UZR )&;h\1EtH+ I,qHiNIZ[ɰkrLd+`OLvdmarəYGN}=!DC:3rퟕxgKV.tD舁]ԅdURH0HI6 Dlv }Y35y5\yeo ~ ս ~=3tG.֏8?~RBG%V"8L@.2eQ,R@f99scC:zM BO i}`Idк2kn[&y>-9" jmxN񺺔<<UϽn6E|3Jv`E3;^mW7 j0IM^ݔ_=\Ǫ@O`1\0݄A•" HTlV)^+zaUsx%LʀIo֠@`uE,UF89'V]]+/Jd1L>z]ٻx 8m>]֥ͮNj 1j!ARu=#T6GIÚ`戋yCA#D82(ŃFjBƒ 38cQkg=h߿[Mun('c_tpN "sT, VSb}d`o¨5?},ТAfwmu+PԶe׵ ioAdKx %nt Q}|jf*ZMT8㨢̡1JG'6W =EDtQyPWwnS+ cF)(TK#R q۟S^` Pgpb}~vs8On;oSIv~SLa:o?_]giϟ-XJ-L>b%AijOoI)^NGF:\$g?7^vAb;Noԇ"2U21.3=-~I&E)˺ob7at㫋0azk/|7?5zM *i}ќ_Hո4xzOZ+OЊQt v4^o_\Ur&aWR&O9^o3H.vfFҺ~R]<J\>_^:6 /Fd2׾M\Kn/ˊ{h?2Wj~KEw杗7yGΉLn@r.YM Oͩcw Ɠps/377<&&U &c8j%z%c^ws@JYޕR\Ǽ'`[;n"0JSs;ALOq(#*4נ"?{ة͸1=^_:i U/ǝdש׺Y`dOy}lM_O~>R52$8b`i}Ӕ)zHJ,2մ gj.%YT_śZGfw3eŎ{޼poVLՅ gu-hunr u3اGw^cP*nAZ6B c'dj{* ei^;} nY# N[sڠ".I{e(\tp;"ByLqoUm9φQJ5I^;Up +RF+Ȫ܏lνUAo^댟i˺"Ezq~(Wmf=#E؞٢+OkeiPC.%ΣsvIv*P6׀١7:-Nei6Т &Ǥm:ZRG7OhjbWP "B `_ UZ/d6gB&#twX2O4ƢeMڟS"y~I>FDAUsrʧ_-d=~,Eg.hgCI91v3trBȤ} 8ﳞ/zOփ`YJP1 E.2LDLFKޥLA,d֎9.@Is (UHч$bPDJJŘNcGf<彐==״}ĹPV苍J\J'5VTOZHv ƣA FtJjmu Ұ.ZֿvQƌ hAtԂNm'*HA526g72*Ͱf싅1҈˻ʨxKdӼ"zt~:=8=[\0..(LKEax<'eMjI` RVxYTR,fKdA^Cfڱ/bc<ܳ lGe٨~ܹxG 7F?>M4~\UYF_m;QKf7TVx btEu;ඇ?,8܉rp48a8{ɲ=R՘tjK`^aPA22' kn)(_uJ`cCۥRaBT@[+}Β>)P<t6Er绐, H p3qxCMƕRd6JVIbǓK,q9yn޷u޶;n"!+g]9(oH%:NPRĞ :j l`XQlk :j(aBȜJS. 
T$%SJ͙Z6k ᮶wn,C(w:K&tB3f(A|^+=5@&j 5E䓈h DB"$h=Z> T0@K\'ef 9zoTY4IG5>-O%xP}J&hZͨV~'V$% m,h-|) Z,::WZ|9b~61etbhYh)V_?aRs I&1W*ԨgbhmVDXyjLj"Im#LWJK֙T|Ȇ TI`z l03vn4vqEUfkuqrQ'۳['~(=(뀄1䉂M]ԟQ?}qb穎@P]Ez^g|G @\_:[֚enHq7*x /.zɌP,Y}$(Y?S*/N,/~Xw$,/dߣBW (~rWuF>wcqSGņ]ׯ{~1Ydu>v@Nٸl~_~_^}}ImQ3koOWٯ^NX'yf|b_{=23pDp+0GWU\WUZmWUJC#\}p5hGW,0*UvpU4f 0GWU`*>2=Vpdecm3[='7K|TK}NO&?nt7<[}ՉPd~~4ל~;);P!O|dtV_z$jZ Zci>h:LA#.aڐrKgE?쓅pj(cBUYzj1,,ok Z%yt3tͺ?MP޿}`Wo>e|W[iUKTk~,ޞʶ]J3@7~(cP)ݑdqXD$uʇ}tC7ϧgW~dtN[W`gaPG DRḀDgNY/ ( s* (kDIH_DDDT@֪Ra^SԺ+\Tz 2ː.JʮfЁɻУN! 2qDNF֥[;kL{>]{V>7N:/ԐmUkT)O!ve$0}x 3h=MRls Fh*?V%~y=>oݔhD_/1xvfr9N&w)6̓ɟ>=nz}CwُϷڀT2E`.JRH#0g#dO*KJ0jmO}NLxLCKynʽboqkE,-_϶-ZqrT/m> xg6J*dz{%QER =I\A(CхyrJ7waփ?1?\;XNEI`к":jSL|*C%E@fVrYf7W՝D}4:e^IDN,dߍ1[[ %#(12 mf/ےzoJeʸrn? cfc|ȓ#||g NyFHO}95) rl~mv}SϟNkcy9n[ d;2'bZ`@0U61bw1?- HKV3:@HH^gHZRxYViHvҪ[z6[gh@rѨ;:^uA`o4RP>1{GX>GJ^e^E Ȭ?tGYL޺"F(]$א|mCJE\D"caU5_+0ڃ/Ms G#4l&6y:PIgهB-zYqPxFYO2y?qݷN^ۿtWZMeLZ(m'Th O=1H.e61{{{_ަRl|#4rv97X O~;7Ohth8>2Wg/,:?NgM&Y=io7e/}k>CA>X,:A;Jdgmˉ$\rH gs8-J-A 5(V8d>h5j}k5޴`#X+PRѨف c(+U\  JqTQ( Ȣ` p.\$JGƮ.s}nYuJ'^N3[d&Xr }l'hOj,\D}"=T: `M*%NV}OTT:c*% ^aWEw\. #=~ tʔ#hquAk&rKh"(܋&RG4TpPTb !ʡ0Px :M1`\RͱV $:Uh,"xEx<鿩:<V3|i泳͊>5:ÖuKc1Rŕɠ:xQJgwKNcF1ZleF2:ڀmT Cu]f3pSr>ѱOFYl+xhN&$^Sd`{ C1T(mc4N]]J-.b Ƀxj(@ )sǨPi¯Tj'N a)g;;e&#QHYrnaj3j@&ye4zl"hnDt,wvΎc9K'9 "ﰟJŢ{Ϭ;ͮt]zC cqK΋, حՃsI@-]ק]Ƴgkmbx%ȍMS\6$m:ZM b瑱u Sj{XtK65An^!,eYcnZ+=/o^ۡ畖!)5O7usAح.zuKǵy7kP||9o񏝹rݿC'}dZA7 \Kq7j^x'}2R2Ҿcgt#͘.%dQ@/d%asL8hx/w"9!Ckﷂ{=PK*#! PW-4AE❔Q[KU{,œ}?{_o;~w)|[ky4YE|3ǫ_.YJJMJ@ꠂZ:Y-13磏P}$Ls!u ^Ku1FGV]<. S5S"7kutׇ]5FRwD^XV>+e@d-d")J8`6z3-;wZQ vBW{oS-[)K/P &,uQ(c62Ltb/+- $UaʤHXGaObWh#+_ksRvutiң]{ѫmi΢]@N#b5Ga\ %фgVmԁ!. Bsp+_,c\;"E RrHd 3@kei%1@!Ayvp/)=fJU0Tqi;8%b N hE3ĔP\Pc$A8HENBpkqR(H ZVᾔp~]2\߇YWgaVIIcX78!45=nL14 :i1seOr> n%OBLkÅ$q}D9Xpapk)gzdGX" 2%9WRP jґNù\MgϏ@(XA{! H ˁ ׃+c p,dߝfi 0%1'id|)mNd&[iGn>e.-_ !Ÿ #S+2rqYNm9qG.Ũ47IqiNujLk1~..g&H"^>/?>}ug/~x /3A52~}h»k04]κ^]ƕ]Ne%|ݹ,Dt1]v!_D+AtRcb~ H6'#y\HegJS= c iPθ׵+95|(bR%c6(SZyxCɡ׼:xǹb H4DJAN,DhrҝU|`pIWrvsE&C'V-u\dw*޿=*~xH:bPq_y<ׁ0d"pQb@ޗ W*,6#Ra3Mwn۫ɼn\}-0d&w[Rv=5^.V$pLNz&Q YYREaDr0‘D9,Pu6`|(l|K>}98Դ|t|?"F ?5C$?t=#T6GIÚ`戋yC؁# GH\10׉/8*WS $+QXF# ҠC>qG9nϭ$yw[Htw}ow@ .!J[B{f<}8(J1%8ЖGucq,<У\BCh2;zS<}RqeP"H) %CfqƢB8 2 Hz<]eq$sc9=cAo:8A x:EF#0AjJ M5ћwFzODhB]Iԍ5`sw ;`≀FWu!Oy%"ŀgU91FZ&{ S."*ll66(0fbN,0"9 謣0AukooN{/SL,GsW7_s,LBpt>믋`T %gLbYDIPZZIpo瓄-#='g?<}8.KR:U4#)pEYOOҧ}p"@{; ^?6J-Iw+:rCVx>'iGP膣" eJ4 ?g~13?/[ @/6vv[2FO5Ex\Fpg02ޚIk 9hA܌_Ngi/Ojz|4yu_6rG_7Goarի;S.ߗΈ7`懶 aKyCŦ>-|ƀk(z;R AϧW덫v462yGW\~fb9U/ȪMumZ%#M421+6H+ [lCrV8jѠ.V*ud2NyfD?pc}t\,|UݽGptN@uP5YL7Ljџe?l4's\+~m;iIͪ${|[7cM<}TuΠߍx 2Ф\,scצNVq C"Qa{6s"m@y H*4HQꔡyYTT0{+ *jG^x}g|12"10ϹJM#%/0=.`Rψ^w`z0.**WgqD_kkp-Rpt(= -ye"C⍇J!9ɥ|N:)T7;@hrd3TT\,T_cgѰf+O;W2Jy%x&kY\ coL|,̖=et"ǛN<Be Udl~1+^Y@U^qe"O\FrycLidDrX4_X3W(cKG64˫&2p?)ٻ6$W#_['xK|_rAZˤ߯z"EjZC8r駪M˸HttO_hv#FӼ?ͫ__4)hSNDN.d~,/ij@KZ-SYڝ6̌n%s_O+՛ y5WjM'c $%Gj=oz\*;6W]=8fGFg:JYxc>y FȪeeiסsJc=X"gSes"\uHnW? 4e:ȋan~*-\9"a,Λӏ_?)|v5\uh09=y&ʩet|vڐ66i59O6(g?N#VV߽Xܩ/ug2:kjge'B[">%`І kIȴn3̘A}YYΪl~E-z*^@phhVMuyt0Ywd>SXPYeh5Mע3Ya%֐sZfrMޑQ>xgh -JZxZըV\r:3u( L:+mџ,z2*X\df-zpw:ѫStbS+|@2̀Y]QW!3xINry$srIC R:"fY0Gk{HD$R-eh}1j*7<Ѐ +-x1;4F&S*DtY$(5sYF˸Vڽ4E&577H'M4^YK8miq4>겾nsjv00XCu2Xqkh gYrwk(mf)ls5 /r@ciA]#f~rN?\wDnHjR=%ӧsթ^ uYzZǁ)gZ:cˋVn8}P]7Z(ZWʡ~X²V(PfIR:% Yp:"+@oLe0`1)(BJ傤Gk3p!k KF&s킧$n?A>5}@вG|G׆q[|S돒ZOH1zSGUb`vi]w5zVOpC)hѧ]Q::MYUT=(z:LnD)wZ#8$@3b!jTgcBρg4u3MJ.%@M)z༷ \1Y&Wfٲ˝ݧ#jĨ񝵓O& . yNF r RTqr3ځ5 sm Y& 6$Q/+E@ &YؓU,@1T6jlTiGDK?j.'եTkdjxJfM_woxk0}~}?+=G- rfQ Gw2E~zō3\xV( pUQ1fYIЈ1K.zRlTMLܒ\HFjlQW4cW,Xیל֙^v7'.?r~_hŴWY͓ |`H.esApa/ͽ\R(u~#W6@8{. 
&Rؔ ^C4FɷchRf`ZlFl?%/]M;vD^5P[#j %s,B,͂Hߣx31vdz OredxkDx`Y=6.9G{E̐yA&CX Z$Aq$YEȩN>RMx9ޮ{ﲖ~슈2"#"NIEDN$ȸ=-Gmd4Di.gkmֆhL8#I'͍$; I 0I EeD&fD&T{q;3>:iɮ(+8< GZy95wWx{x#7(qKJt{4;OհWeEw0[+7'%M~B7;oɿnx_Ni,¶1IFÞͶ0kޥVsF,k-HϷg!|AӐ^At)k:Nǽ=_یGݷu2ۏ;m^u.Ҹ*[ɓ8 /6պQ˨pQ[F4~}Ѽ{-SU*ifUMԟE/F懩~}?0.ZٙnsϚ3F~4i^BpvY&."|fKwX@_i&T4LJk㇀y7 Om& :ۦ[|9Pݧ5j"Wb뻇.+}6HؖI_~RvaD+_btX2jv0iֿ!?9<䕭^Rq}vyN=*R클슛 3FBPH!l&(K0vKܼ͙}6OF_̕fFzc(6EV5y8ݥF@^eYryHiEJ#,Z{(rܬ)ݡfSNj"Oz_Nj&_ޮygk06VJ[pV f1ӮyƊ؆7)mIHr ?aAwz}½ Ǔ0>[ē̒^XϒF=I YlQDT*UؗRJkv2K3w5KSb9kK_p8i; wK#jd_+e:t^w/.@ޑv<]QYKaQWx+-N '<<_ݽHhD@qaHTS1K)||;ׯ$v=*>ͬԔ]M(T\]iZ|+7jh:g-ٝI[B9 Ų)\DKO~]Ddɽ@ON)UAEE1k"!Q6J+uH4"XW[Wek]}Et%7lzA NWf\ΎKWۡ8tS;.JnAW]kT UiS**THWD i SCW.MVSR^ ]Q)B UhO([gEGB[9t8-=y*uJ ubWPpo ]EOڋ*HW X 3pc]ET:]ELtp)In+Ʉ=XDj݆--T}TOz_mVG"(;O?lN^ dmbd]'4PDQǽBs)"/8ı9X!uLhp'M-G톈nK4&U,3EJhGeDԆM:s~A vlFZu$>1JmAW]cmD4"\*BWSoٟ"J[ztEQH4"`CWPMVS+@n% %ap)m ]ZN(I Dbj j ]E~Vh%VNW%E-]@0T&9WU:]EtJPN`6Ҭ1tp M+@{ͻ[dZtJ)f ={Ht>u뵷(7CtPch>tDKO~!"iZH];']ajz'֡mʥn[Ԇ[ʂڂvkЉl%5˿mjhV۹m5lG1*HO0h",hzd&m%z4)DWLE1@*dϯM;/Xt`$};mt7l[Еnjצǜ DWXJp5k ]'tQϖf3m6w?䯐 ߼Jʟc}B_%&~^nxR-z~pNk;i 1U,˾t5 oj+@P˨Ŗm~e/i_P.RlJ>5HT9kLF"9Ex&Yz Q;d%94߫F6gp> Tk?k˹W9;;vouߗ_GfA4s)*ij]@!c?S IcA1PN"iT ? 5rrh?_"2,+ ǝ_"!_o-IU&,߭a|{[ߡncf?&UX)RV[mt$q?]Q l}P9Fa8[rZp*_۸p?cM\nC7JoX0$ggD!c[X1c yeiɬhnD,p^vdzc͔&M@!fi󲼃-"<1wqaЛɣ=)*kU\VrѰ?y`-zZ!rXLJvX[mbȄ*›&LU:1C)L"tdaڒEGTeVRHB1u:MxμxDhf3,BR3m\C{O"gk)ևp2hqBc:J=g<;y/qѿ oǏ%R _x"Bt9;P8Lzr ƹft2~ %O_4X`\e5MU}]Ƅ=wD[9ѳ{^4b 4L#@A}U^ 9SR3_T}ޟv #oDKE_Ad2*xDv=-Q5FE4ui7anQzt~)4deia܀g%,KO2Ķ*`1OSR>`#͙7_{EA!L{)c._ޑf@N%$eQP/dӌK,瞙zOSt;TcO{F'Tf(*]T@.,#IiKt`U:>NCQg,d+ɨ2"2_n FрG!eL{[x"TcBɐ!}܏I `pCX,-,xGIf6 㝸X̪ :s5~\K􈗚߬qM0h6ht}ў[.,G]7hX,ecEJ5eDEXK^1!t C4.¸5ja52aZe4e")R8ăza' :㧼ddW|Ot"tN;aY=TtjMu8ņ`QT "C K]&N[1L318VZf8!B#aL)PqZFe)A@`[/5μF؅> `uT ~Iߥ0@OX3?;L,U].~i%J&a+N,p1GjAos"$dFJ ,V LxܕA9!Å*6sg؆D` s)1dNfRafX4N+ t (=@2x$mK0˷R8SqXYYI}UtF+ z4Q[KZypj \Xgy̠;UFNug'"vLϧ /Z1D]D81~"Ig:ܤ)pebx *aUxO85ia9t?jR1VP}w1;J"jP'}tj?;1GSMF$2E[L^B^wr5/ !Ÿ (d9\R:\+FE߯a#/fTdL4y eo' o=0 ̕ϳAn^m-8HǷ&*wPrbsb|uNW!kaͦ,K~Ga>Wtȳ՟up/i ?T֐~}hH36qLwJ ò3^E_IEE,p3.z1HlKZ?N1 q[n9#mZ RY( x2nlAhjt2ShO-*m Ӊ8,pg~<7Vm`gfW.&׮_<@8~6:ItJ<5mʇ5Kľ*x`aݳgӟ,y: nSdWvCJAa7,Jjش֘N+~h(mMTQɓTK]y5=)2(pvP)  ߂51jqwZĽKF{޻dUO54b%3x ko<&A3q-VN r!8q<9;ODp|eEwNZw깷v#BFP!< II$2ab@e׸Tyg =.8w#λcYo_Cߝ4ޚ$ϷsE~Β鬰o[H AOqQshH%S6/aN(ESfM-X) ,kA˫jy5Z+`3V ,H#6"Mr#g8HA #9&r_#IŸn]-8`% 6l TwWیe 8,ɢHS-ǀmi,zzI: eQ gJ$VTT*ХaԌU:$ i-3g;(_lS3 WsOF@Y/^X9-+W՛?[TR&Ijp71{.XEYj8A,a΂^%΂JQ)Hd ValJǦ d?n5V;?VD|{BB嚝zZ7&7,{(H {rY_kcA)2$%@g]>6ȶ%B(HB@ D@RPJivDsIKp3rKvg0ש)Jf=X&4zb9d_1xYZ僻Mnި)H8;4>0DSCE8*%%L2kZ5{9ʀKP*RT0lqlP5$E Lk_OjK dˇhLY1 L8` Vl7$e8=vmK/.YRA;MKe GV\by,W$NF#Cj ZPBv-}Q%{աSe F6|B奄()@NJ&dPA'&j$tqBG"VVitkA־pMλ1\ tZx@ß*J.2R¡b*/pS-ݗ\Y}w:k_h=Z2t:c"vqPjHj9ʉm$xLM̳* Vy72ۚaB 4yg]]9۟GAyaV||M7doMRuRqpəLsD;ʊ!1uDƐ˹CR!X X#pp?/V-|t+Oh~M6!=v18cEQ愳Y&"`P:JU*0DEMD6sZ>NZ=f}llV߫k>r_çPAvF8U@θCAIT*]Df@;A+6a{ CŅdEхyg}78ɼU gMex3?\mS'r]?XmE2Å١_4Al׺/$zOF?⳺(<;ZVprG2j|mmYew4-%M&,{r)<]ӝrtZs)0+~}ʦ{~4fG,}7|w~l<9^0gG҅'2>]8:/t~̦l:Ld: S:hj<7YTiwN~=M3`N2o?ƳKJDI2EHirR|wGbtZZQgB4ѿ^Ҝ~}Sc1[yLҳ֠.(;Ѩ.ա$H^3 >,q;}5}]6 ~ۯ/eFޔ;~޽l0Rbwچb.*-CtgYdZҧ7+]1Gl=Y#br4Ȧ6͹8+TFt` 蠀@$Q6lrCI}qXYEJ~~YE|smj =.L**ekGA鄨n5P)N:B]bi&W90݂NfDdžÛƥp9~ 3})LefƒǪ;T?' SMmx;վSt_yY֋' ų)L~oPVm%0ITgTJ"PVeU+sp#1z6I|Tw΁A( lkElY22AFmS=9[ QF1$[9"u1h 6:nm6#gǢ!b{6io/3:1e!v,0y}Y3*종B;4]g Jn-d(K@ l?[YS;Ld&Wj#]NԺOV+r *,y=OIxd_/7R<9MǛduvy?Z]~ރ^Y{C<)*?azB b %hefkOz]e?VzREAYlPmU_r.1 JrR1PHJ1);4JiYhZ#c3rvVi8c_, ,Lӳjos.oۇ,.ҪÓ雓3GbB&ޢl:TA.fG2JPy4El ~ ìQiUau&: L \ITDt/͸cOԾ8jP{`7,j" :Yra;L HT(>4>ZuGѲe( {Ig$S 96\ljzzz3raԯ,*0s+ecD"x#K9Jy;ϵ! 
#Y4TX.%Zh6@ a#k y7Ό313Y95abOZٝ4Izi3r#b_fqq?Xg3.uc\T.xǣaeYKC];#BΤEՏPoȃv$FxnŻCfܱ/xh;-EQB^Gn>cLwH)Q;!TU 2otӍn95#;FvFe(SJ`pɉ;K|?"0N:hٜ;ԆؽҠNpb OYʱX!hcQhk6֔HPl6T*\LڂK$,Y!jme&)xw1{&<[1_Mj4et!EtlY{|>Ag3mFB;@ƈX|QIdv\"/9CH<*:9+ 7-֞K$, 2`!PK :E3*Q81  Tʚ^*QJ.]z ~BsY-]>OtR,Iy'jdA'JX[/Q)kbծg]ߣg62Cd!Ћ޻rW|S<0jw\9oGdM=tVQGh[]s .~:+(|{z=;'ՏlN#O|Q{ܰ{*)EO)ҷPfYr+Kbǯ-c{B]|~#a|: ]-^ۼv..cyn*u߈m=?Rg޹͏Vw~O@owG~|XߎҪp߱vTFy~^ɈqM}?磕Gn[7:)6 NC6:#JΧQa1I􅃛`/Z:E)kH t*jD/ D@:::E>ԶNN>xwc$:Jt@ZSCA)vR5ѠZZ'U7#gi] D qa7T.D^n9ή~?**…BJ,ɦkAzDMUguU䶓lH9{Z0&re+ŨMLII-3$UH.&RXW(!IMtPŨbU.%UXo (@HZ`5+׿J-;^|4nW*>}'lgK;pۓpbف<lف(sUa֦jĢ2f2*h^vtUSߋz޳۰APs6UkbI9ŀ R46iaV;Rk2Oy1OɃ k.8*|s^(kgj,Ҫ~V=t<4qPOfiѶbطWC&nMψhuJĺƢ좕`dm0wfDezE u6—ݣR&O\.\Y(I Kz.$-[8$["c:cu*dw!qU9o%ZRHr k6-r&:?2WLV{ver2+)JxJpءjje%CrIf\?PX㑵7/\Q "ŧe]/Wޮ|y.&m> !S{ٜ7G!UYUS#oJ̱ϩP.mլjރҙ{zhлQ|DR*.Gqu~mĀZAfL7%XS!QU,KHu՗jm|*"*4 lsL/>$U(Ʋ-6d1Z0f )xX򱌂U.I{V+ BAtTX5#HR]J q[0Q1 ^& E21 )K!!(<4%NWh A^AkOu\q3$ʆÌD)RAVO$Xty:7p, jʊi ,q)Y'eеo1%iJLW𲭜YA0Qim#RR P69 !zX P Ze#@Pl<(OԏlX&0V@FFiBE+l=J2%XY#T*u Jໜ2b $_cX&"Ֆ! Nm%%iOb3.9G e4 aAH `-e2] Ri}\6*8ϜI[S2 : > : X*?%( 0*8dR0 3GLŇ,QuU r4_ J-7HSѝ)E#(#ƍEVS4iϒ o"< eo_HtT`[fW>!xj{RQA}]sv"ITk0 !ѿ(`5BΣpBY3%Y(k@Hdy{PF }F֊XqZ{`4q\ dK NRM}"y&H!t_2unm!q"p /} ,0PFUkmȲ0ۦz?L &ɗbLc{n([TDETխSVRmDYdnm D v2"}7X!Lvyw/)~_~0N6HևK uZ}4 ]{ #0= ԥvws 7@%|./СLG֝@R@"Pa22;"<%l qD2Z]& A1#,0A%DEgHjwXC-D{PhvCJ"A9R"̵C c~7X6 sPH(]$?ߌVE:z#;RMr;[NO6m_/\]cXK7epʶ~uv9z/l90޶Ijhnj2?3rl/Txr@VO|@kd1/Z}@@ѶU[/-_R1k…SVrJOb'?Ob'?Ob'?Ob'?Ob'?ObTfar2iQMY>_~(f :ty8jb,ڷYwWw­18Tn<B|3Z㷵j5mkGRCAEzB" 5 ⻮hBHx0>" =`m 2+ǐq b> }@b> }@b> }@b> }@b> }@b> HC`n(f?@b}@@)d39}@b> }@b> }@b> }@b> }@b> }@bzoկ?h)_\n/_/ iۼX5cA%ٖ8rlKU1%uؖR:-m铦C,kW ]Ѽo5U4Q(tЕgӫaEj/Qg=祫ZGVV+*8rs}J(.i_PчhfT, @VK%4|C}qju׳b9iW5gmZ]z ngZ|otuwơrYClc-LzuU_;W{lt_1{+:ժh譩TmWxDFi+xq`9qOUb>m Z]lmL{lWGFc;Q!+b]E/M]n1q-!Z=rʶk j*ӈXն ]U6-|IZtZoBN^]6 `רR:7t"2]!]))gW eJ+U^ 8VWHWZjuAtE)/m"0]!]+e)tEhv<`z2` +ֺ"VBWv ; %=tJ+udБ( ]LWOBWV "'7R h"R2]#]i0%zt= FX%;%{R.6A5di.q^,vĭ飌GzNW2*#+c "uEp)ړA'+kB%]苡++BPjj?FrVjY"NBWvO eduut [˘树#!pÉ^I#٣VKSM`i})4Mh iGɪ(i&Otҗ OjI =%C ? 5y&e:*Pmc<-{zW^U Nq$Џ8?+__?O۷ ]N7:QĴ0}x4] /g`N]< ~D >%@24VG͗{Һnِb~<]_7HVpK^Q?އu-6bvMn#4pemwzgND8kquV)l3MbB&F_Mcv"dthksRMyz0/dg 4BߟMf/g魍Y5z5i,9Ry8F52٨q/g3 yf6?n~Y n֫cŪ[c&xПVE[}lያ#ͧon[<\]88] Oz65U >:J GѫZlU4ɻ_deP! ^p3[F{Ew4j?⊣3tYhEǣܤ锔>ߌz<'WפDB[~<`AҵGq%ƣ:[.S9S\/$oXЅ޵x8ht~Z$l-/?;(a_R{-sRY]P)c%F}~ׁVz דi6M3J:qMU4 J@fNJw]YWݦ~F,l^b"\?in{υa2lݖi.&<;ӸP+ku'AHKZVӝRHDlnʫGj% A j 헥mb#C]'kmN&FLr*fZ|0oY+^EMmOĔLLmDK]\7]񹷢Z=M[Q__ey_&UyCIrgPw_*CUc-臌TDͭVGH8JABl1'Yõ[^^ R:ҫ|lϤ)/#[R3_3mD6ٕ>^]ԩ/>~/VUF:;gthv unS3ĿQ*:jSЭ Y8ܔ:VcЦIcv yh=v|B.~(ٻ6kW-xT" g20%@&pŘ"9$mn5w-rKd{뜪$BP cd=XmQvw˔0i%IGpu[<|+9e4r9:ph; s$ #S czYKB'A5+|XH JZLV޳Pem Z'ؑG'5'q/)sc߫/ţ$$N3AZWYhmA"% |b׺ճ#GNXgg򅚌i>d!q[ٿ7y2v]B̵U98kp>;,kh,!јEUL9e&oU9EY;*'<'aKh[0H6p0E)2(Q,ĭugPfJȱZ@tl`}(ܵ > @KؔI7}˶LuԌ0q#qW6]X8jŶ9 ZyeP*0k*4";l  AYp՗#ߚj8d!! 
2f DBV\tSWAGHU2ULLY 4qJ )5B{K^ s,1ik9X@ѲFyIgIX("5\-?Rlwh[՗W]`l7鬪<jaǠ3jf"x0H"ƌ "e gGŴGh+7:";m*4hk&Ea kQY'6[(bJG,$E2YIts^P 8\BA'[z=S٫fL%|qHl!; IKnen?ty"*&"H)NɂK&2ɔ^EYr$0]Ajr3(i6Z'5"PBLk[;l w~"E=YA9 0Js,i``E/'U_xb~)A&kpuR4?0V1~x/iɈWs#x'Q|Fur /U1;{mD`:0p6>Á-w!(ZU@M$~Z7[w7}Eh˲p8dpj0~A´QCDD|Bk[b)B,T\ֿ=zqiXm>qEN.0M(5E޼x9ZO^<]-x >aݪ3?̃JۍÑp4fV[L=jr8#FSK%wtԌhlFlfs$)N8fbGDOm;D Zm+\WҴRFJÚ_F:t )(:g7U5ǕfU率φi,w>.k]2wHr0 Dlhr UԽ秽Z7b8+?ߗ._^\>%_|?s&t1cE]:%sI?jC@Q[^(Н ֑ @X]߽~q~}"< [i1x0HPw[|-0k%IgWW)Y)2 KשR_6|srȍG־W$& Qq @g9w 0!\҉ۓiβK/˕r^{6{ _?KTϛ%#IVdTZ6t4ҥ1:i'zdwM|7<7 ^9[(v9qˇ7KxVVϴ N*5XMTJ U XLYPc5_jcج>ޑ)Ľ Z0:6RrJN6U`$BJ QzIJf}Vd"x&X\"K560j;m-tJ3CsՔVxVZf؝ã|_=b1A ?7 x7{ZKvX#[/" w۸'1DŽ,H:f.3E zn3fO(=h֊(hǂqrQ\.2T$8h)(%kn*LM+&{_XJ$vщO]b횡1Up39(eP㻘=b(Yb(!W2d!y8MEϴ5B!JzL6d՞+s z΍y. -ΕH")G-Kpk/;q)ɱg hY5OLX2kWT.̨f֨oZ/Gl2a4w *:n~$n.4e9srpChMeNr HY܉f'2l fMpFGlqyNH<ڙQ 'AeeB  DWY.v/yX Q|f-֮%8XOT?tii:shL}VweļhЏN4<$Aʖ+1ѐfЁ^Y&Ju QybO!H')#)]HɓB X@ "dByZ!C9# s "G@:LDں 'lqdsޢ'愳 { NBШ{թffe̹rCRH׮K lˎ_o;>1NP,jv ,A;L#=guDfwR8% OHwΛ#};%$p.TbQsoǢ dȣdE9IRDf2")9UH$]|h2JiT=4ylVڢs=33HU`ZfhKiEqP[:8?xRϹh=aJB ܃႙)+ZPrN[3<%uZ4 PоN}(j.!ɖRJЧU _HdY*9h6WII2o&̤\JǻS;.EfX`mS2b tS wŞb#/myD%j[:_uk=K:wEuOɰd_Du8ucYu5v:M_\]P^]On&arYtspEOC&_ٛiӱ.Lps<ҐZ?~)|JW"tۅ&~IhaeL4w:cԴ\b"^+&nhHMȞ74S@=&*1ł5V$~%mѿ?~|U. k?4;zsA'^b^'*f3t4SQ3._n$|.r6 r8k7}? ܡ响 ?pufe߇J 1u1=] }zݳ+ <. C3\dyA=ܠbtNLyW,Vt&&Rt6,ֲSw)fe11캺xwV(&G-60|n*Ƭ^+D2J+0!;\*B4Uۧ݊*ۋ*V!<". dNJE:Yf lC2ڈw1ҫBzEC~L՞uv9lYg/&D}|Y@ٹ@]\b\zҐM}ljD-|MyofeYeR\2hz³g:^'vӿfz_Ӈ<8WR?Oo~p->Bk9"u?3JI/ZdIBVU.=k;p/զ>;%?nnKA1}r-ұiԻϝMtf/yZ[ӱ%0 Gf~1ۤ]|~۷ 8:jwnn F/_8 ͼvkhFtm{jHG&ן7'/_چw*emttk)id6W(pSb0ņ\ +r]jˮ=v]U{]ghwXԯY ѯud5rhk VzpⳝOjXzNOp!Jc[rZxfϥs֗cK`Ou,&h:O!j45&)-e -cDdF$4{6 U WE1V"5Nʊ+!Gvʤ2Mzi  +C4rΞE9.zcl*4&'ch5jzڞ4xгt& K) A&0Ȝ+N^'0LJgU̱Rh8 r m J=ML,2 3dPk$sVVֳjp6ԳV "A->:*App"!wyt0Y|$ ƁUbT 7#˿4d5%\w$tT.@p)$wBznKzPzLjVOVb9FP(uVڂ4|2*X\df-{p7GTP5f&&Ӵu6ʓz̙4w+H2G5^5H0 }ж}VDujJN* +WMD:z$&W Wm4V|qd kcR?bA=kSgZvaOqvZ&.K|y!:iOYV(!QP CI`a>cLyT6Y 1trբسY\6o{Bd M6^yd IColߡ&oiưkkHny[e i")HSFkd:,O (9 q$F־ܶxݺ|ZFןߥ?WjQ4*7h]n21vht[ޟ ll^ߏ'k kUJveVeV=R|~N{SIe)tg?Jrxsx2(Kp)+LTXof֐|LL=}8SlzqcR[D~5-A<~͕F3yDqqTdgP2Fc޳Tc\zKVMLk8WDRǙR IƮ\h+s!\xf672^3;4?84 7/vB./'//f -ahEX*:'#Eo9 ԁN3.`)؈.resݭ5gEBBRkFh2v m`XC,Q 'bvIǎ-X]l`}dE("y:fØMvHFJr{#&ȀjФB쳵]HJn6\Ftg $470'-Hr'!ʌX %jO8]PZg5)ٕee^/`7diR%kryF1&, dzfd osLx^LZұ+Be>4ѧMΑ6xCgamApC㑢5Go`ɎxK C 8 C[  xסZhd=+l ]B Z`]RNVe+z݋ Z]RN63U7tEpUAˡtUP 9 & |ŠZtU ZygEi:ARF(-zDW? R :A*`A?tU|Aj[WRN~ZNR_Fw#}ܴ?磗+m / ЌԹLџ_~7N6/ipQ /u_JcNWR:E2ưt `d+k ]NW$J窓  3;b\sgB&m)Y4ؽLcn/´({ܼU{BiȇtN6ezIiX1&R5oX4.J;#嘻o7`U,"[pv4iϋ?E}A@W ]Cg[%o]mX{~;ǎ(=V(5X&܂p] Qx|ZtUJ*hAw ʮIT]Gɻ\*hUAi@W'HWRh.d] \d}v޺" tutv4V3X ]Zje t$t$7 \}+B+u<IJKiO;0Uk{c]T#y몠b+#a\p=⌾p֢F\ckdGSa뾰tA+EYC>6:Jxq%S=ö^?Hozm#X]Ȓ2Ad3ls1%5{` B(peo Z{4֝NѣEl% \MMAkX骠D37CWvág޵Ƒc׿"2d^RLCdb:XVVg=-kQCꮮ:<փ[GOz\C+OC6ttخ'D `b pR芼l6ȋ1fKWϐWKRWp\'_`=7JUWϑ(T }\ ] a1jn (7n9-]}b_Z/+K+ *K+O |9]8vKWϐuAt9iCWĥВ9ҕZ5 +CWnK+zVZ(n9ոlņ.[!볬Angt{j[t{gY#!&`])7atnncsWb&Zk7J'[ztlw˙D\os~?5Pcc}Pڀօ)y>%kMtu햮d[]sp-Z1NWes+Nzε /ܰSG6t c_8p\ ] "KV7 -]=Gbļ `1뉗BW@+f@㖮!]H- h).b hP-]=CȞDWՀt)t5ƍWW@)f#] uyq{uUә^^E퍫M7nu.bv1oYgк,/c57|66{К߸yԿ&}Ჳvm u=sM=`EՐJ*N`v"5"^YIk:*K/ݸ/a`Ȇ/͞|чW-};ؗ @Nvxj燓t?~_~[g;y78ŁۻRa:-9:uj~ RGuֽdi)1[9n^>u=_z1ذˋؑY$€kR&Z7}asD ˹p. 
Fˋt+z`Ӿj-7]>Z%o]tE[zlד VdAt´`i)tj̦@y2-]=d.I]09p[ ] !l:] v֖ ]9,`p>[t7׳+&/~IfՀK~)t5n@ɺgHW]] /\ K+%7Jt J,,  KEEWե@oJ1[ztS$38]vjztB!7^ө!:#*WkFEM-V D7bnRfk%]n`Ypb.hln(7jnZ!:c/` '~x@[ztպ$*K7mW2byNW]o ޸9NT_Ú$Hxs?ʻ:F p|a2%^^1srw߿9[x(< b/}>ݻtq܇btYY'y4>s;lOovs[ }o^ ?Z&鵴|Ӱ ihOoMYj׍(o]}aܣz6oo_m'hUz;qQ=~~=j(PF?MJoS9<F{qŨ~ru-5s蔫V6ޅָ"gw^A$^i֝H~.d~>c(q#[ǯI]}ܠAU hkwc򈰬8e |}84NpOg^}D'~Z_79 D_ߥC\ Hm/۽PN{=Wz9ȱ%ڵ$笇 QWj3I,F ]4JjG>wzg,>c sqaJU:_];W'{oj.'=Q1&Ri9wuYdcz7QךC9e45@!I81[RՖlwذ`HWuh'`?.5(pwRjкH6R! Y#Md nѠ-ޡu 9yќ9iZICj"ἹQU(;6#K9PJJ6E5]$#d  u}o)|X"(-!w8i$M((E?!b仄ŤUh#!9DmMu/ goR1\D@J} w*xc'qW;#?x=(At`-$t>xPT8xi5>B[:f%9@ҕ$kr Re1GNPj>|`a3BaZ R|" L&jZeT^[0`c͖[CŒJW>|Oy&!/ɪS[WFQx $nGeh]`¬JrA[ϋw4GshT fƉ!_[OW '+^,&MiOߴ+ΕLK0(}u滷. 4z26 ] LeVbǹ֤l/ <JVO Ѯ b P"7f'+ÁwàDlQ:$5\Q ty݈]yauPʕ<+TP=`˰aHȶČ*Az|BjJ 뛷 3K` >߁uŢqmb>YH&+c|GɟuWCAWtVNyvآqnᣆ{ A l6&RMS19]B lxRg =<֬U 62g(d8g%48g|FDhudsPk>@Měރ +wvZ;i` EZX->Mg\{㶹I/oՌG$[ķ'(s"uECHV: C@(9/@w:dx'7Q02G(I@n8ϡ>@ApaAjiT;.:x޲tDh`PMzGO4,o2oA"x.b]w bpZ$AktLy*p!0( 2~CW ^9 Ťj֡iy(`N,qevJ z*2%AŋܥM3AFGx'4yuL> $',/κnڎNN(ґ$V\t?P攺qe5RFIhPٯ7Aj|L]q5+ƯC oWgeILkGT#b6`VJžZD@lgƲ{A- H9 r@=,‾;&xgp7x5!r@!r@!r@!r@!r@!r@!r@!r@!r@!r@JŐ8 '1 *Xo{7"82䀐B9 䀐B9 䀐B9 䀐B9 䀐B9 䀐B9 䀐B9 䀐B9 䀐B9 䀐B9 䀐B9 䀐:THA%~@`~@\;t9 p҂@#r@!r@!r@!r@!r@!r@!r@!r@!r@!r@!r@!Ёr@ZC-O 3l0_3+D9 !r@!r@!r@!r@!r@!r@!r@!r@!r@!r@!r@!t(ׁ޺Mq$\i3~N wH)"4 >nd%+41\f-{-E&77mrz1PAA7/@K8Z]-cbNC{<RxWP!2ӉSe7 x C,<(3G 4̶eFN/]酨|ߩvHv̅¥t~hqr<+wt -Zwqk\9__ÑZ}YJI, 0'K&=D07)ϔfUѶ"%ܗYUZ2/A%k6ߝ=dwUʲJ"/R&*RbK/會:>B_:'00>\6f${?Vj\Groyred81\;4U~f\}7rX0{W[A_o3Ϸv֊' JgQ| (W-zʔv@r 6f0r͵z(r*J]z+-cZ> bN#W\9*[T1Yru8r2r@r ~ϭeDE\e+9A:@Y;$&#W\"Wڧ2 QD HFEzU(+ŵl@r ~BqWr}ԹX(wV2:D %{tAԈBм4[[$_`Æ#\N"Z^eRJCie!ߘ%;CX*穛ա-隔oT^lQy*1NBܥVxAFS2J,t̻cGNqNِ >B ?V{ Vn~GpO+$z@r >l0ClYlf(Wߍ\;8&B[6j/\fػZ/J!,Hl!WEO%a{jVs#Wܧ^r;kwVrrur%\q6;PUO,bfmp7tg?\ <6=rjtߟV.hl(`E)Q>$R -Ѷˉ=r )?JMU{>?P˟mΚVly.ȷ=ὪT #jITeBpLȼaE v27W'禙^\w͸nR:M=;z=7c oяsp!Ah6@y{}>o.] g/FybKnwsgxf"kn)Xe :Ζ$u*k<>y^L e7/[Ɉ{XԿ=e]\ªԩW:1UPlJJ&xG 7 6j 4ɨ d;Q15a'ϒrADa2ided}:J~{WR%\3uQ9mA6=h4 m2Hu Y4ҥlɕ"5W8-xҘ-ਃ8e YV~~m~|dNn~}<#?{/A\s-4B!̗lW,OmøI>Z%fMm%V;*h\9ЏCQD%$BRi]%8q 1;m+DaRkRʤ]BC ;\&@*ZŎ̹\`Kٟ4ڊ1G'*<1_fG6b!*zep65K Ca̖5H`d6>;Aik1#1ʹʲ]%#[yk/+ݮQH$lcƌ0 HM4y냫L D9Z8l>%謰Q\z~O#^\OEXvLɋ/ӵ+?j3{iOZrE0?_s ӧ?VA>cqŋ ‰&,Q" n<3?kwEeLgP1Mx2Z}N{=˔SY&1*^B@Bg]7mA$)ґՂy6'ԍԠMcT|Ď6Psn-Bp?trtG>Y{\gwS2#A{tEj:pY(K<@)AWچRʷ]@LBTuƔBYzg^ )LSvl࿴iΡ~4;rzp;L8u);\=yi11u]d ~<2m2stǺu~]aO ?iS9{ZW[2. A!%vgu W-gyW.Ev=wĨfKI1_ؗQd q;p29J8C*4$',z?׾o񗪵Ez;+.&}h eY,l yL3x^Dv ) n()tAVTTJ9Ibyn1^9-c@C˅"ns!\;Qq1ׁۥEzɏ ݸY'V\c\b,~oo7㼐E3+db?bnIQ %ϊEr A%Ra"]XbzEڮN YVvn//A톴, d~{`^5Ȭeg? ZaZ}9L+,1YG`[XZt9…~]wϨ_EHF?7. 
lޏa^ulGߙ:f]=W<Ѽu:}SMW\rqT4y}0_"e4_Gճi=9mAN*{$,h$o|3;sL"6i&y ?S_ Od.X95]q~ԍm#IEYwWWwW0d,[eW[kYRH^=CRdQEA<3St*碰S#zX+3sOwwV(2/xk/~iK>;Ȼxz<$Ц\)V~]f߷DL-~=??/_#j!ը8Ĉ0$TeUFHYD糑%nO޶|5g]ʚS?^l/uF^3vly /ÇNoR/7ܔAU7妡!Nj,J3O$?͆u]?'AuhBԺG yn63m6/5ùij 1 Kj-zvwˎy=Ngv|yI;UIZ^/m\';67_ [4(\E+w->wb=StgrVS imoy0G ݨBvsuVA4Z:c%@4'fMQ.~e:muHE[V]`k ,[¦jTs rXS&49v:B^80Wü>Pu?7ލ7սW >az{*oɟ@Yoꎸpuվv הWpZq񤩚͵A{͈*=vX=EcU~^u( ٠Fy/SzlQzU۵Mm5ᮏ3'ʇ$cS;GeA/X9c6Tj2!+a&oj^ 1?7z񷡢2o˕c98:iV ]f"n{.#`eG'i׽Ż5a}ߝO9vv=-bygĆ֗I_ޯck5dõ}%fd+q*,j{X =Ctݡ^>Y-c?#Y:8Pjv,8΃ͭiqj]v9=+e+ZqL VC5p1L+  k5cC2r󬪵sUmtB*ށnldi}x@EƘmݗ<}'W^ O1 bZpDOhtL\!~*s>:Qѹ\pbo~\9ј?XJA \(:ĬCنqr͞{7P@QNrZ٤}IPaJ 'q*eg8PԨ+|)g+HK9Y# U̾jSM`7-$w#T4[wFZdt-9$8F dȵ8:4aBTsoA-m;(vQ;ʩJFf' @".fS=$9!ċw?qžumS|m'.NOשe-wwy㵷X]4iLX{M2J׮ZysN)3 kdfm¥ɊHjb ط%#_s1r5DDϕ#P2Պ3Ak+B+JfF=w*ta7Vrg]H.yeyCkr'Ę#F`u/.WN`D*km- ŵ)ﵚE[A(6;k˵,  4ن9ξo+ dWS*y3ۋ;+.hnܱWk/7i +2*L*8LncMtTd>2;CZvՇ:#hj/:E䢒GQr \mVICYNĴ)ƽ)jDY#I#NNX ۄ`wcM@j 9cjXh'Vy+6٣|虘,,8aOq'195c-DыU}uvmaҋ^buKN\w8bPɸXT>(7]}чݸc[}!nw6t[tmh™ً419IpS#%k{˶㲁"o|݉aG|S~z@E"\T^@voɁ-3U{3OI* **&Ax ʱ6c -6 hLIj<> *{J"٠CmW e!ػY9WԦQlUwQr1.cm,5۷ӛqh{pixy{,Ű\J4Vx@ͥR6$34`#E-8a D:q\Ur"A)]r͞8!q\/rLl";9+ m> }X(ۢ)EaQ@ z@[KuкL² aW]ᄽ_h; >[.1`t&i2TK),R6D bm)@k|hX!n M_UujtT) +PGFYȹG=*V;$尖28'jߨ sxt3YTIZklAgbM!ﳈz^LAP0Ϳ> y 1WERs%6F'-@խx<X{],֣ 8VZh֕+m`fe *v[QR<9QMg9F=l?1[պQDk(v5s**E&a[Xw_ /=#cIBpH;eVlS ^DbWYgDV7Ύ&?iVQ4eVł>UWc ;dKUc|D{0)>c-v}zތUG\6 jUn :#c4`Yj4&0c;I[v>+C+/zoucI,:B%4JgkQV&oA\l6''`l vm;KAy3)KaU,OEDK@Y%$L&l2* ·ձ,a?7Ыr6Ei(!ס͇5'@SjuFԓߢcFeLT2"sv S%Fs)N]v_ ྵf{7:c;!GBZ;(Wbo kxWxӢqq88rl[g&pծ ݿ*Mc1w[Up>nJa40! /v>~xaU@&;G[T);%)rrC?;ǥPڷPb7=Ѯ ?F;o#|Z)  Ι,6\ڤAjX{ 5l$#@߮I8;9gbԽ;h'\ۺ]0#G׶.lxc<1Y,7P5bpc|/-gK^>Q9Rʡ;]<`Ҵs<a{S{>+# Oa"T|}t˙HSF(krKEsv>;/ K[z|TQ{r|Ͽ;n<89]*Ҳ˳ۿzhgXrDdYDـGuK6`b708aPU~Gq-"uyH֕'_!Ѓ3 ^MbL# 'S9P] B0Xf sx}q` p(YUkvC Tc* 1z2z?Efڧ-ˤ>ҝ>̅3O6AY;6e_?,Qc7qcFpSV6Bz-V`XoM5b bA=-ë AΔ)7~ 8K*:2^C+ TA--fްNbk!l#}b,~~:;|$vWx ]ߖJyg,mޗzOөCA*MCr4dc V;5ɕ\ϐ]>A"JpbmkZ#VwmUgnMHu+}K$ádgx8$E$Ңa'S]UUuu&PfT`lrD`0`P;#g/Y$b;9 vx`u+nܦ]G,K2?J]9#^_C^C>oC ˨'uiuYx @YL Nc-3MMDd5oQ9s\'__K` ;Ix:C l`۶wj` 6ƟQ\"rx2{D.uN]5IL'j]Tl,N$צD$J+srh鵺'?U~ KhǟZut5N۬;r6sвeíwjz^~(%^ +vGB3 ӥ# l$r~y/D-痎4يiZB2&X*i Z5;)D'1ReqEc7iBvH=&[,jw*g 6\h((rm{AS*sB8e0Fs|m~ Pg2DO" XZYh6 ^J 9IUR-i-`S.Wm2]Yr]_ve ,:]'l_R kU% ĖIQ2a/ "Yq-}Oj?VVZ3Ox02,)6SK/P &,uQ(c62L:L1JˬJ2R:=M2_ jÌZFe)A@Ypyuаj}Gի9?xڲ, =tUӐAt<.=XQp!6,BBIf4ę&8bu` l9^νm{Uꑝ`@r SdN>JA)I'µ;ƠO•.g+aU'lBO\;ȞȞNjA?Nc,Χ S:J$(Ҋ|?/kY棒!W\t t6j8\'\+FfC?n`#.f0*0y^,i]͓k ~.O~~_6fsm[gs;)7CO'*h{ktIk=HAO)r e&]ٻW AmXb>*DFҋ9#P b)>Š/TiT3n)=/~Cث\:.~~钺DEHa͚0GuDW"T!UZNb_?.\ox B~y?K}ſ?{xo/u& Ѐ;enj»z5ojv5f7yr-Dlgy:z0䃗 wx)}>ZD&.F+1!*-lx>SYp3.!6s;&ʥu0ilZ R [ r{IQWtw8L0+ǣU";kH)'Ԓ)E 0ÏNvΞޣSܳIW]]rt0C5*Cz Y@SJx-UPb*AixΉq+ E:/C?NHgP>dqRNG IwmK_O~ |I/!wB*a񟯺𞁳)CAz䆟FyJ B'T rv 3 :TH\񆖖S vRJ.ßd`8+K?0q8zL2RGq-VN our thp_r=T9K!ܬsqȧ"BFP!< IIb`M01ҧRo>?8 ۠oAvM>k:je]tۣ}BM;@`'r:ʼnZnqrX-KH] QW\"E]%jwuGuՕ c}@*DRJT*uTWP]I؜2gΟxQ.΋EoDeV3~;R/@3D)~_`\70`;%I |-A*N2v(j:Q+оDG5մ\`7U}+9/{Q/S߬P_o:Z#`*Lb RqXt$>i`<ڣY}G ݃:@EzG#è"H:>C&4ߨLtȄox LqˀY,C2NcʠvbYċl[.+ ;roܻr,IQI)%3ZXc,%wZY%qeve[oDLy ۫ hEՇb$j'I~@ F]%r%jT7KO/~//~zz헺PW]z5A3qd\$[k.C3[LtYr~ыB8 +ӬdNR LKL2f4p fqjO_a?˻|]ۥ1Mo@Cd17!<ь < (;d60ZJtܧֈpe+-,R"]A2>хk.aM4ϛˤ.Cpȥ+.F*SjTD̠6JcEIM=*gՌ=Q9hM8\!+$!D9 oT;ccQo()_KImCal4fhkKs3lT[^_?;+:l Q4Yr 4R0VR/="J \n%ịUh4BkM̨P( F刊`r:A Utt,^}ce=`N|겡ShHbS䫤+UUWI#>Ʋ+  r8^D.Mr^DGkjNٽ#lqWG&ǣGWVK{ڒ]z;5ez#QHYfyaj3jȍ{-#cFs+%c33r(}`@TԥKf`PiTjF 0ߝ!;F]oXJؕ~(;*>i{}7ޝoAZ {ݶM5:FnAvt)|4mN΄8}z382d0"8'Ktx J-,EdQ8 NB tDq^vP󢙖ڞ,fސ_LxTKzȋ]| ېN>K3яY\ukzNe2z*2k#B2Z~9IP 5%x: rwXCkLʑU[)r&E^ llFɳ<[ʫd 7p7qNze}VlqOUֆ] 66&r%iD7(&yaVwz@=br0->PBZezu0jM6kBZ 
tûS6 LqhבO% a[7+3Bh^V뤧ɷu &7֖B]/ׂ :-ĎQl͘yc!C܏8!en7Y2|<`]a]"K 1qs<~IH ÜHoͤ6 h 7FNJ)L80epbry~zي8gr4s?-tRK3.ZIRH p9ܦn?1K gC=⻼2{e#k_'/$ z4ߛ6ԛN2Nw@]ǭLq.zKs[o&~2vl8驱m2v 5wh;}H ' VbQ樑zv1'dj6DXS1쌡"_a*B4$4%5٭ɔ6c|/@^3Zsj D͈߶Vh_|[?K{ecU[ "3tZ"djKkEx #4Rk((.ŷ/F]qۿ[#\3ܷx Xݕrk -icQJ^J=. %fd|K6|o_pyaF, N}ӓf[Wv7wGY;;-7}"2n*)8&ف!,FFH:Q&cۈX(INiY92!RtB2%QȤ`9#[h`%>gƇURd%?G),d" $J7(\ڟ ^Ovb=1L=;ad1it@ :8@*M+1cg3_Ɵ?Z|׮0eSQI r)vUwyˋqQXY6@6/RSyۆކ]cE:6Bʗ{-Jnt"HaK5 Foms’UBg۳[O11fcq|쇟a=ش1`=2ۦ{U1`yUw60_5 ;&Y0#C=B1ϗ4G*FG1ҙz};-{iS;L tAuAᴵt YZ~"b.hn2k0f C툾 # 慑 +}*[(+60j8B&2!8!vlϲ!ȧCјspb#黽O^[{Da|&\9@0ESN0w,O5ʕyw {[2@MW;$?l.ߩ,FRƸ1~7N<ːc}-*@>yGi t捃sYV@7  ,װftw.1^MVֹ&*z,ps]kR1j*Qjly*sk*.<"nʛYy(Oj;ɯ[WgPC-P'S$,ߺV>@8։pѱj q%%^\1Y9p(^nQ8qCv;FFfB0RMw$\'l@Rq'JhAmUk)Vdx+G%贾\(jCXID LHHܷq~yI)!R(VgA%t IùyjR=mHLҕ/Yz9؉$W>BFX#eDH&*HT_s*HMƒlB1O(6lJLJ᝟zR,;h֥ȖǂЋg2g^pm'f6bolUhݠMܱy?63§uSe lNO.oS{`s=>cvooM<3Tjd_d+ٷƜ*L%5>xqwU-;*ᢒ4AU9$̾gݘ`I WG-RDYjxd¥A-a'shbi ҳDŽ59'=66{lG>]fM},_޿x̧N{0#f3̆w&#ie9WT^hS@_E{_%<0uä=5ŗo}s?wU3 &j6N ;CNn3vɄ~)Bpýo3VC4/co<->G)%iu[-#bG/㨨ۙ.ą r`ny C\#ġSX? I(TF1NK Q#"DU8&h""DJh$E)L7x}8L2~`G:(CbKIi3)p㞛*,BO@Jh,ȍ <v &'SM;VJdF0{L+cc<@bvLEe~&0,nT`AGi8W K&QJDzPٽ+/ bm'='!A0sela:5*J'1mKܧͩfJ%6ll}V#wG 3L4rO# 0D~n3V\1{](GT7tST,*[:nMu@fG> 8?8dm-ɴ )"a`7@j#Yy6i+Wr|4E ftb'd=V+5T|n\}b7nCW7Q=o"NDPt2MRL cнf$2ͫcrMzkЎx>_֔-ݟ$u-U_$sc$E?NV gmvvBY5l6Ӽ(_zN+$C]F::ka+-aQf}tfoәy:dF*u=,".4Iae *[s%Qe!9hn#8@ qS9MKb8(vrZ QpLs/B1Q .#M%E⸵*C}IAPjY')asUa #pIĸ(A4e)3嘂)̇RGVsۗE `nfK2ۓ#5(ܔTL0)V s!`)DV9bE&3:۾]S `ԩ;n[S*4/Ln gUt* a]J4@ AR0#ݟ.URlrVX(n3Az+CohD3.dp=uB$c3#CSul׊9|3u+7^RN<42f$cL^hiNf`¥LhSØ(Jg~xrN S8ah(( I44`XK8h^0''PBvZD%e)~KQbv93*s ނ^HN)Rwâi R s I) 6 %d20Lj'l: J0o`? LQLhԆն0Pnl= ZZn[0rVOOݢ 8~d!}돏?_~;?.. Hpyqը*Bο];.SɎw!w nBa@LiFF(A=n&$&)P%&v,z1mo83|w^RIBr@:O\Qc53xtXEty Ds fd18hw ̯^C6|U(2g{sg0&3ͨ^q7b?xtT=M < 7y]{Qf=+]U<+M:GdMwвsWIfm gUxO/ۊ[jwѵ]x"dړw3 ZJ^2fR~jz_% NE3xۯg'ɬ+'=wd-(zc4E pu #Rt/ϸp^JݥN NvDֶRj:-|a ;MMjLLg7X=8).f-:ܪR|{trN'6۩` U {ͷXL?q8 ʅ)07[?!@ _͖F :5jr<T k=B4hɟ><ܻÓBvTo ݂V:=jX+iS8E݃L]^N?rzΒU mb3-Npz c$a&JI*[$W#ް.q-բ }ђPD3>D7$z#:?ƅ@D?ZҢ_N8 ս%Eid=G/R.|xǸ)c܂SU-e4 j͋h\^x&"ֺx?nSy 0a{λA];옑-=F΋Ff#mv_ÏY9ʪ0Y01Pf9KEA%ɔbYpTPɤOLT^@t-LדIܚR&w~J~JPu `eJtR _.adWs'8X$ǘ ݖqg˗\RqsPYRd,t2PFTxjBboފ*l}Wqjw2ͱɿ{XD-Q(;ӎ\F'+p" &d/l|E c$P̚=]9!ٽRy"*LuF=^MD4x&%<ަ a}bn]wVȮ>ttt՗?W aհ=9/g~>WŃĕ'OkkP| tsxM =<|=mWwImP٫a"m JssӽY/FG:|URmCTluF+9awIT b FTXT @-У_FLRE O>'#nD<ʊ•xN:>>j!.lt)ǜ C[3 (=&j J%SK"@~Yq>S4'[+%Ѣl~.$xK_S׻,jE_vb9TцCq}~ˌ c)q:OB fEP5[R =tjAAĈ vfDw'j};}5\L/իOѩ!|G!F.LBvE|Oߝ΋O&cb{V*Xk[1 U/9C?!ЌQ_VЛ[YU7wzے}>*LpU+}˧{'^3J)UKxwG [@\2ac[=|hD CR6cA awu*d_O CYA O+N܆4Dt'9^SnN?SnNHykb7A`m,Bjp+9VED,x$5]Aˮ㻢T$k:Q/{WxC d:(i1g6}J~7|e.8-%|4*3<'s\s62=!gR\,!)WrąiYRs F˵F4%2iOZ 3řj@pzL㢽$F5`kWU?$hgU }TWhqm.BȩpuӒ Y 9Uj5|udp7JR0~zhMOMO6u7 1%%8b闕! 0θf Nkir5ot5Evƴ]7"-kpzPA`fQ)'Z B`CPaZpocn̫&%Og~6߶Nyx ܻ_/|Ee`z\|oLP#-C9Ze 7(/?t'ۼ mbHj^R"$ 6!ݥL2MӬK10&ps@hcm/0.,W(ٙ;گ+DA)ys"ϱ.J߂=*cp{$D(TŴ8K}.P^*8XG1y!2In]l'5(yMkB(Xs%HR.U@e}*Y紛nr-o'<΋ ^SH40$ݒ}"̗iE jc "rSuxW\")gqj)o읕ݲ|:g^[E #zy{(傰 _DjpUcquVu{men\պT=Kz-GHS;xTWL~SH,͖ϜfOA3ń<3KKL5NbD0UGL;樜?4 ơkPT+i3R3#"vvi_>{"B_8nCȑ to=o8Ѿ}kh4ǿ,'Ÿdƫk8K?H[*.|iuTD[P<ṲAs0|DLK.527!G*#n6d _Rb ܪTn6)&L3j»x0#gV3y& 65}~?&͜7s._>dVBGYVG)4)}E_,|M(Od*$ixE`0b9fk e*5z2!HW)fd>G0猒 }{@seI߮22'Ӊ*Vt&lۍʋ6W5e -Q5-*+*R\fs%x͋.U ť|ה/Rs;8O0 3@Iړ˫8ZT2]/)8dwlz'غ:ޫW.R3a6u' XOgt5F;l2/bysŷSLUgKʞ#kj9Bn$HE xBI^u6m̆@e: M#Zjn WX&FU|>,I lLm"RT݃[}<ЬFuWJ DS ! 
h,N #``F)yYP@ M ^gVYm-uDaG:_+pk*ň >R@} )Jp ݈fm yHb)Mv3].!1ܩDf2܌ndA%AU:|u-f)mNpߍ?iA9@R=[Qpm#R*or-*y%+(:k*Dt` kZ0ΌoqP޳ rVݶ N `h%E%>K0?Ҧt/">D˙ɥ2gĺDrLE6zڭwUU&|ZTFR.$u@%m[$eYldZُ\BB#EFЌV 1$&vHъ2SI+zf4R4ؒ4pSB[6[l@vh26@>N e ;`RiqFmm˜T6X1Nx "e\GFs&ɭ4GFm|K[?KƧ4O jQ1D_}DfH8`sZq-sW,^&V%lw-^Q$b&/V2>?!|LU\>mɑDenz/%`{|,qptޥ֏.xMV^Voc|xW X vdO%ȀP94Ly+CI.#bX-s,f=UN0Fcxp &Hgq'-y (AZcvfSMD6C!y ]%FiؗI(;0ȶ a-Ժ?yITc9\GeHxCf+)?(k=֥;~?MA}?q>cvzRe׶%9/!x:[KBW8IW_w*%Lb2,CB|66|RNt̊ěC\)S˵6Dv@*lh&ӟtUa.9OǃRHPpF;sܾF0VZצ@09|d<U"$SKn3BtЈsZr/i%)Z#% ݈rK8s;B˵62g\T1F"qY!>eJD2^]{F"<9{heocmR_ xte.־ 1X fEە.Zy H V PIR|`]av[}1+UL B"&ruZ˕nU)W3aCYP.,xL 18 ^Wc `һ%y*}# { <H+4CRR`cш[00AM2EE(8qNLd4 >]:*y GӇ1"XȦxD*E$m3#mKnI+B9iv8/~2zO;v|JnقRj2喭:A \mpq#?k=aibqtOGwttOGw]9D82{na&MA*r0 QJ- oDDBReN0HrWviWv]2Kdͬt 8[(8V,HeG5ɂKMztF,tq? hr@1܍=N _8o07qɬݕ?x9W_Ә>i?O/e^jw'N/{ (9SXld0$*(dhu0އy(\ÊHQTaC/U+1^<ޗ?uk Rkq/,V5_  ɼ.]T*i`UlK6)5D<<7uS hL2H'7ʍQȀ6-8/aX" OS0ac`9>AX"Gn~Cp;qeK6Z >a4odVa%Vl*@&2KLřR EZSۀ:7yΞs0/kSP)š=Y)as9ZIIj86T 'py$8zCQ°pSR(˂IXEQRbi ÕҀ,@, Y`R-r|SAq˗Ao?؅/~m}gޯx*gxq׏Q.cv;EwDB+[uoGw3LTY F XcjgݽñR*~!v1rlmT7<˘18Su )XeFH~4Le.i^`r%HJfx -1e YU"r9inގvrY"HH4wn{^)iwUĜIb˼QuW=f U SA!c%pnZTN|PkS}e_Xcwt^_9aVYCʼS7A"ƚ&qWx\ d&YD$03I 6cP)x5 Z0UD. 3fh*3'x5=lV_M܇-. A k,o :e=#ZMpEP;AKνݬ=@!WB"uutd6*jϨ^7h A /w!ᖐq";O!x7j^H,n8 9zݮvDTS0dG4g>ݻ,2Bpv 3ywYݔ%cF[1^QgNQ+1{;-7X*B9.=7whQХ\y Ns.Ɓ-E`T4Ԩ fc.<)~ޣXNto`Ƙޣ޶Eba_J0X@R@oA(yZKۼ-A=f4Ҩ!r+.`ĘA; 6n AKE2^o>|>uC%ʸR jݙ+ } x 1֙Xm~x`ۅ=5*Lgbb0"ؐT$*@ \԰FT*TOqƫ~\D`*xy TKMA/I{g GT"/`=>hci)ly7\D}cnMhv\shUm;.m[pIw-,Po`I֌W}+9s8enx7 9!4'3ycx t||0a!>}hlmM7=QBz(bp[zIs-3.,Wa:+t0{'Y@~F&Zi]V.^L{*uPU: ՒJa*v+W.U}Qg$c B6#&3ő^TVBeDr*IPʙ2UX2F0*IM2^hO\.iT H/yg>ȞO?~|nyA o[}hb~OD"W̓_k/jϟ}L6ip5]<>}1?Ӈok%/nd#/Wty5>k\k"E—@Q9]I_7RAW%XҾj5Х.tvt?.~GpJA׏ݍ=H ^xZYgrҞGh2ک Q+Xv{T#bi-=Y{ƁZfT0G'A,A031WŪ-yUbc?q#!%fbƒ}g]>=O_o#g@՗J YZP +K L"ٽuWҊU՛2>fKEŻGeUl * 'MfrZ1iYL/z-Ёx}i }djG)6|c$),T/U#IX?筨KM;>o޶KGAp0g9P<܂DG|ރ;1n Zk lֿ<7γk\y`=9hS=ﭶsٶl<_Z{k&lD j)8!ZE4fjrX1 R9fÚ d@!"I,Yޥ E\a눅9ܙ.sL3fagYYZӲPK2B0,*2g&ltC6]p 4NK1ŪMaBʲ4a)MIJd .$؁AT?g>ZmwѥC_/ζ_yAN^O/8!)QK[d=qW~6@|ckD(^ _kqpC'iP8%5:.;9w'=&CQ1 ESnO N(~U>>8HLe!j{H΢}FwfOOlmωZvP1摡mCl ou+TC娭@ϕ)u yソ+OUN{_~w ˜FqĞ*+'/o?`>z zG68!9yHHGe\rR c XLcČ;5ú|.R, L|tRQ&Et fusN ZRt+\oL7O?oŐUQl-892dJd~wLYn5^Q!ؽr pc0}q|.WO>D;8.g6w8veEK7&K@|]_uoaD,ױZ/;%=u 6Yߝ PQ{FB#iytE wBzP++Kk Wȫx$AX$S%t$mW]׼vrflWh߀E0y[ q\rC=[3Vc ^>L;f9n 7a\n';F'0S9vQ]bAxd %4=S+: -hu퇍]2AN؁BȒzOR0JFrWX-nfJpՕUMrGWpR`D$IY(ũ,NS zL7nɶ׍ bGq6|ccNqn6*`ۭ͆OGyQ53G$}<>L WN~S~(r+" /+Cp#o2wSAI E3滃1cJiS+W>+"QD[|  B*`3Ky_l^fC*@kùO/=ٯ³2O򒓄%"/ӒWФ5)96h$'Zmu̶yE S:OIK|=s[ /" b2 H" =ZNHTC5CaOF\WzTՄbݗ K%W;L9LP"cЍe 氚5@ 47ս'.Tk.dBFI+EN q A,hm `L$AKy(ƈ:>%ldlm$cBKb0`2MY9fIaD*(&YWbZ76_i&P#(gDcX􇑋qQO,s 9;i bsv,Dg[OWp s" ̒cÝ 5ecN"hKUPg4V^eLrY&r>3x <+ZQTh4ā# SǖPea8 ؜Aˡ@ ccëcwѮdaH#5h ɱ/k[2?;o8иZstvFp kOEjRQ$݋fw)Ij'kbzF$\2R|#XWϖs!dT:-U3a;>eg7,=eM(D ]v40$j".l}~KM)cl\LPT0CiqqZB}BDW\ꓷWEv3ut. ]h ?8p|N[F 'mkڅSQGPWM3> 1`}z`Gѕb!G$dW7O)=OrC-ic *aOXA%Sw:rU(@h BZb]yŨ#|3`˥Q C \ϬzTPĔq?(REX-9TFRK 7.Za:s5(ՒjeFDJԀ$Ѵ|e@A:/HOVrJxT y"b+bӊXx[ׅOEFt1YZj.zꚮ~<$ѳ|Zf.l a1 q?ҢZE^6)E|0JVӑڶЬkš Atq! Z|٥e)ROCR\Fx~H0vXΥؐ+4YٝUr_N痣R7"~=cќ: Dw ?;#/)j1El~)$ m_i= > 7yLpcdi ? Ѻ}LOAkU@nG@pU0smuL4҇^IpDI Q:L;%Ϗ3/14UYwޏS;@Mkglm\ʃP{g/aQWm$TNyRA`h%b6UKi;ճOJF|P xK}/;V犻WÐWț8͝wuRƸ`̙\zg_FWӹE2W]ng4ۨr8L3/a" 7lz/I5?3p=z /^ѻoS)Zohnv>U eի͙ \b^./Y R%oАYvzˣ*Uj:vd!  
P7/p o ||믿n%=^uͫ3JU\V pܕV'N;,q*y _iYDCi(%KL3 O*I> .JCOg?8zn\•i9Њp0X*i$a`)A%xJ\$ TXhJ𜪨:֒5ANr{4^"Z( T98G!Ts4{#uJkp5bѨ]4Zv-G ?}Bo<9zv,w4q3cAa0 ͂ƴւ+˧`yԆQx;SБdU߹r - D<N !CgYeq8D],G$'#tZyjg@Csm"w~r*C ruI*5i%zvP~$F!Fv#C 4jpuL!}jtsWn Hp 4Fg{fH%掚s :QBxa-HT(#jvH'韗m^-[d y!߇ӡ,lZ+m *YY~8|*|a{@\(Lxk&d=EP-03jQ".mF찾۳q}ER>/>0&(IiilFB7r'ˢF@<Q>{K' W8LPrI*ϥcCJFtSRm^oHjSO[\7aC| e;o(_jhbɽ|  9e)} :t}HB-" ѱ!RK"&N,9!~V)F*Q炒" ŨI`)jNT:EkDCCZ[~D4pi$lh5I*ׇvBp(F0.$cp)ZmHT#ƊzD15DG7P3sOgT~>6${>SE0z]qi ӓ%̒ά5Pn&M֑k d5&]eX'M}Rۘ oːkfv".&mIŅࠔl.; dGĈY**퉭H9iEIkpA23ЊiF3RiV)cS O֚ +KOю_]=_q1}w=?~j_4,q{_}3ӱoOy/{ѾYc#Llܖ*-Uv[TnKѕ~ ^z0~5&!I j(=*T K'^, FRrPt=4Y%8H_'< kfp59ѫ^,/4z%,@X wp/dDJ]P=k d8׎*b1DE-Rrܩ%#+N)˼A%Fkm.yIP|o\lUpZhw%h kzFU``Zhĭ \r/ eyH+r7ۯwo}L/^YV B65݃_Ag'0W&/v<\-z.z{="*Nf%dz; -|ΐ j"`p]|!3Y ;HWY<Ѭ:P5d&8hjt DG?J>ߔ1Y#WhK gV's 4Fyb?'}qdo!^^MA4ZvLݬ/7\Ϳ5yE6TiZtQ<*AT9Z΃(P3sVE-#/w%PyӨL8,FNYhSzkv6 Sy[by33g4 #p.`$LAc![FJ49S^)gOAeK5WߞO$(m65X)Q}sDB" @5aVV7ȉ vf0q餏 S.Hh pVz{zxzf~fg9ٻ&v:]~.%dbdr:%ͤBs:&fhu)*qE鬑 _vgGs9L\%IΊ7FK㏈ ߾l*s~Tϕ˕_h*qYJH!HLK?d JIgK%8.ٿKD~NDr2&&\\d&Q!3`aA+)Thd~NT+ϖi,.*ل F Nq0 OXBah:#qs([=YWF!oɔ WwA8z唙F`uưwM."@ђ$+'|I#YN?n;EaLwWBtf8ww޵md"e`%bY,Axyk'-uLSUHER[@uN:u.8 _O_/xʊnAx|7=l`#})hzS3~gٟ֛~aOKs/js(5(BtUTӆVjH;U>Fqanjh`91Jp\@_}U*[~:zCia+men RU JߌiocK$#%ce`RJE1)@vAY SaԔ+$i YwhZٗ\^v6=McX=::'QCoJeS(0LTtU!+J?P2ۜl"ȒO= 4z@N V%rlp_Fgsѹ}zqم8=ѹNwF—lqQly\ :`c.C2I* 쏮<=xk=0cd'l!ϓg7.nᙸS8h=yk 3;Ӌ}7<v$Σn\K%Neb ֙gB W9S:1Zp5'Du*"g= qtRnj0zj<e67#oCt{EH q,pRLzVRE)UIH@-Q" cL,Ʊ,SY` TZDY;8C#$hЈx6C qh-)A-+^ ЂSRUnv%F #|E&g3wvNһUA/F(uP^aFb(-1\ wtBű(`kdjc]([$(OU'&Bbhc5^K,P 7/e6+W9!܂&5w;u_mD僚w?ܖ,du7{xMpvl(iAǏocZu989u ˩:F T!nͅu,^*5g0 n|p ""rJU]d 9؜U>)DKj˯cY>%~&O7V"d oC#ˁJ:˲9X+eJv[a>h~Iv].PT(.N(MV~cx]p ܊o=Ev_=>gD뽰' -ʻ%ʕwyCh|1xP_+ n7k3K1[Wv$/|]W&/ ֊x8V2BPtiW05Fo_[w^IW%??9CQuվ2)P(^1jTl/_NiRQz'6f^mQ?!/f4M@c{ީw)&g,X[ ;F?KO .IH Ff=|;DvX|2to|Fa[fptrzLu8mzgoyH3DHgHo9tPu/n3fE !iwwTN3FZw#9|= "0-zĜ@ACnچ/L 0*>-Vvdd?=~,r7v:^mqu{w_=|[nflw<–bZ786 |`S,ZZ|f|rb(=ms|U?;bbF#r0avh۰N"i;Ж✰:wЬc{<3[##;ow{ܰFϝFP:N'N~L8uRwpp`r>}պ:U3kt{c1%H\u !q ְ~*Qo_azBJjh55R7|LRsME C=Y7Yl3?ݘvcn`}$əIt,Aq!AS0DIs!,p&Rba%4݌s1ޢ?N>VEEzEo'gz yZ YmI˦淬Myy cjX|uB=z={$C N8m.h&2Ö*xt-Mu(thNxwV!C: 65q'W4ZcZ׳IZ_D҃m?_/&'EBpg%:SnFDzX/'<è}ds<+r6,6,u?9Lve?&b22Lzaj?sWg  Y{*d c q+.,+kٕ,YT/f3\4O&si-~?$_=,&qNLfX!݅fɾ,)F=fWØ14QNM,fY>cIgq.$Q$ G.~A+0*Q;ZGj h9 3U8 3 (:uѧfj[**L Ӄ뷏Wٍ{UWNԜJ3CD^b{O51z–CP2,͏TK~pʾ\9Wd겳"WS4~WW#;ּZcΧqJe~<92nRDo͛=_ǴGbl/We(&F]p纫Y{:=սx`pGe ΃1v]ޛ jOJ ?L휻)5Da'=BZeg-*TS*xMfO5awyC]vkI ARCF)IՖ8ͥIl Vg4&L!rT xE6Y,I9˕H]P"I"#?wda^;ɪHozt:;l S7LD {"D_"9 ᶽWCj( i)!Ia+ǧ.pRżW}lЮ!CNА:ZĖYinV3z@Y~^RF;qq}c21΋FtuQKL:V#sd!!?ɐ``; ̃Q9ǓϭoWԚT7ן߰G;>lQIz٠gZu3͇B&)CRP؟BʼnF1)בH4(r))Ҕ'LIU;#I5"f|a8IG| ^>l񘓻_8r,{BG$C6^r//*ͯە&NBcKNQP(r=)O^DnHӼkgѝw'SH6S~2 8SLXhomĩȏJ [-yvY]vʾR/^.Gkd(i)(e1F,W ǔ&iEi"ab0 >#r={u. XjUf %]鶫pWu~0P0'CrTckݔGzv?$_=,&NL$!bӃ8mN9Ck7k$XqPV0g.bR[Es̕.DL+RJ|[' ;@lͩ"nY Ae{)\F0C{JkmnVE==Ǵ;|pm74=NdxȒ"NL uuŋc~H,Q xv<BvKhvSs9ae9$vsHǕMh/S¶G" Q +O؎%CX'Y:R$Uk@4=V^|p`Ug_A0?oQC_<8A'N/͔P5"v{}od5eByO<~ @Y M'e?4=vmPۮ#siIY~dn\CF8<7] \cƄTՙYEw"iFMx4smZ! {ycbQza'1IiOGuO'EKTSUt͢mI* ;*}'K9ywbC޹ Mn1 Fz\?jыvy_ε$hAwo06   u%ٿ8|; @H۷<[F#њ !FUvt0?<yu%/u'X̂5H{QNՇc\ET GSNp|6Z^ ﵬQt {qWnGPt0:900vX(;\~Gvgv8twBRvBxuyJH PXjDzĻ|sˏ^|svs9[M7^z<;y,;W7gg7?_i_^>8Qﻛ?&_]gۘ7g|)C?oā%jO? 
>?tG}ݽ翽F(u Z~~sڊu9 iN=9^^2^?6u ]LJN~:V6uY K",Rz t[.rEkxu7$ݧ^wo7ۏKAwoRG׃^~A hx|?wu]}?p|n/o.>@mGG?uj剒=Ϣd?T& okc~c ;?_fE\FK𞼩3{9< !oLmxr(- T!]؝{6ߍA'?_;~s4ݸQ.n=pV,QlyV| V0LH{CjI?ẇ'LrA&W+Shh?ߋo9L9X (~h|fqVË9eH<85w0J*Leo}O i[ѧp&\ 0=3SL$(Zٗme.T?rPˢ?Z0l .2PKч):OLlo (23NlDQfE Aい)nhwNӎ,g@S*S|Hy>p{xtxl?7+`:Q3ab[r)SVKɈ1rL^`|snb{9 S wN /Ph&΁x3`OpQ*yāS2Oz]i<(7^e:!MRg og5\ _PRg1vJqy0t آ~XN˖<\0<0yp:r7& ^-$$ulѷ<~tD-k2t=F) w1e8Q~,H}b_{X}"mw-RK2ϱOy}ǵ֔14#iV.)*rm eIv @ZgXƫ0ח/a 2SsK^}QۨnGG7߆z|tcBl(L?"FqEg! \spW:e\m^!A#=Y}`\̼晾,h wiAdQr|iCӓ] W(aT`nxVӐ fo?^ru{iV$]?^~jQ̈AՓUBuP&(VyS-҄gҵAiF|o[UL5YkNٹ ~e_ }2X§W޴MSBD;ȬC!G{鐥nKUʕ5)z%A; %OJbkLlbR*7/<}l >.%)\DuɛDH4%'' d i7I#[Qi:sˠ/f]3kQ*,o7~q|Gxka '!҆i3iv)Jhbb?6^&fksF YXVrPZҳ$q]),+W6uC)C1 }%u('>SG2w _s^y `sr/hз*ޱ7ZX7jR}naj[$,}?DۍkOi.tϽd.#+#^N+C:ClUZa#, (0ggUiJ_e:.?Nt(ſMK*n_˦=JnqY(Z M\Iӂ Zͤ'bpyzJ_y1=IS҅Eg l vA RDhD<+NYW>Z>l Mc W-I].WQ[+)IU:U/p4L8FTD׮~]DOZOEÌ:# 9趸-@C6H%hZ1IROthjCĦ|$c|c^ቭݷ`5Uc/UU4%pXpU1860ѥ+d\\bR/fe\2.A=ȦR58Wy7ρRpQpè~7TL]qLV}'YwK;J NqXTL(.ҥ#+sGLNǥUs$:桔$ET`{0"x%u &P/ ,=@qAsP`QV}vv1 8979phpP4wrUNC<9)O⢳/,Ě.Yt|S-q L\sq%cͥ f@]*5IHf^ D5g^`Z.H`h, _:2``6&R="4DCui+"l`rQf!~دP52PŅpҙU܊!vl!;VƢ"2!RP,LD!6deÚ2U^1V5^f*oC!WR\y*oúcLM[~2yvq9)*=a&6yЁٳ90cwq:~.2EݻWԮRg-t/HPqU{_!}`u0fQϻ&|0at6ʈ+\%ܹkVNL7 v .mXGgy@^fh+V-u8hЃWjô;_̱2A ZXqql/O.ZZ\@I *lBmTKlYZї FYSA+ NMԒ<!0Nfn+dC(VЪqʄU:Rʉ*୕;6Sۼk\iݿn9$C0-z\!:h8Ft4BKHv6HI7f <7NS 4XE ƺ9 d7fOEסmӳT.p=2@ zt#xr%Кzt:ېmL O בּsKdi"{h[b$xJ+o6%+ f6.%)|Ka2F ase߆oՂd+HI &^PlzBmRgG ru0taii(|cY39f`146w+y^)*:X_m#'"D^`IHb,!wlw7t ;b|7wą5=-U>}@GL4U@1Ih}c|`tQe똌ʥ4V'Wx(}FJʪW//2X9p+)굴sndmi=[O: f e)6MjC܆`%M0ȇY_ڦ *ڳM)Y17x^7yAG;fF6;όCbń]|CTemX-gj{8_!!{PB| B\S$CRvFUPJC  6{jzjYhŷx"k]6,#`tKM@&S(}34K/F^ǥn/:fBjK{7 ~CtŖßC>N=f\äL@=&fW\Xk*Aa$;y9Jҝx+f.qnoO|| #Hɉ5ӏ\N{esr= F O?s1 B#wmL_FOWIe|3L7o^^^=Λo޿xWos}~"}/9:~m2e1]?;na|JC vۜ4*L.v2 a\0GI-JTxnZ{>e!d:t4\ȷzg{yrߍSG I8\v.Y\'{ C:gc:L# duiHЛ~ w?OZd,=^t{;MD Ӑ. I՟A[|=5cc"Hqﮇ/ϣhtɪoPbSpK0~s1F.= ,їiY 1Kk<~9M:>ֽhP>5Wll2gE*$Xp"Z*"xIƔCgһh"E}/1:`@_,YaA PQ*;DUDoC"{MlRG;* INOAC墿Ejgc<;˽w&V̈́b( %2.pJ#fsRcpD,ZzGzg4D4ĸdW9ZT3E#52Zm dZ[ ^9Dj oAZܙ\fGSE!/rJXK {燅!lB|13q['NNrғN!B@dD"x6 |A1l MeHC::VJYR.,xb9R J\@8:ղ#Fv~ISWcDi'uRi' ~x=I$sOQ-_@gX )#Zid2=̈́ ƑƑet&6ʐJ!z=#vU6; !wXŮN:Yn$7R,!@5D/o &(U,0捵@5&q@oVw\G<8sNgii1ԃ!:IH; fkYdRH( BXj"VB7OPikQUnDDDDVfukh'; :?^]?EȔ&c֮HxG1u#T eL }#<܁ `CD'~)A;H!Ѥ(Vr5jdg,D %8(Ex\l4L@H3/5p 0ג2$U)# Ax")ɪ6E%Cwsn 2(=G<+Z -;x_7g_~ZJFUn?M^\H4t1$|ͻKЯ"G |]0W@ykD\Ti"<=- j!US48V,k8/Ӌh\| .+bt&jǑ`C: BF.e ')u[,4[a Um, ۥ.5F &y,MӐ4אbB/#pUq HX-#"̑]'|wNd5kI"tJ`)`0_r1xbm+bbt=6A] tZa؛8#++u6P|8n9J678zp%x@4 |B ?su1S% n,@G"UZ"!JNG>@M MWom aJzfj*VEP4$7TBU6ВKWhPnf9mƐhEH!BjB (i0-xw.h@z/eZ &N'(%bBrQMJS2BS:TҜ{Sp8@퐄(N;SR ޙ"pnEwް ݀AJxH?~_;e#RmD) 8CS 8&?Dֱy:뀼MRm:\)ZO;PW<" vb H6V,+0RVVL Qǵ.Zh⶯*#, ɪ-4Q h3R! "U;ԓzCy(mwѤcXn-w2&" A~ޛ?2RTY<㜍RHD< ~k"C"#qR,``~"1)X1)9f"jNxm ;vi쩐ݶ`9Iw&.,axu_Qy᫚'Dl1m:*>7N#LaZ*O\W}r-[%kkBdcDZ땸\_([!9^OƳ'.|Āj}%~!_}0W/21ήG;W#Ϛiyfw;? 망y;C[.S|Ǿvj,yyتߥy-jQZ_@p[UV= .u.PٯF@J=5ɊbY!áQp;ܡo-A/tiZݸ9w-Ѱ3L%*|TMv2Pj(>?k2i&*898-<^j~{k!ni*e~}z1͎ɮS!$zբX<5C;kMd~ps㑜זq^39@o#: 7&sX1+Yf, *sXNq)?dA*3.bfD cJ;8{N鋾/vQ-VFKuX-myMI.Q\%JBe;5Sr!fAtLugw6tB2HhQ@!Ek>^Gv|x>4H01 4c&%Ҟ>1&8;Xv&0οH)%7T;4t0fTFx=to_2\Ɂ&!sm0wmG]=ضܵMԸw:'NC\ jk?F-e:ڢEi9#-٣Q+)BN ܫSpEޫ' L?-b}nx;BJ6:hP5*W_/X2DtwWmy [4斗yk|kR.Ď_/ĖO,Ѵ˕%pR/ĸDeM.Ea,`B?EDe9핿?oџ|F8/`̌/|q$bUL~WDU p4͜Ĺ:).; JEE.R4\ZչARá؟菋춈v M_t1PDcb.a/;GO~;b! 
gb$qn^mV)pDuJԓbkHo:khZҝbk c;m{3jmըN*yjz| IyTckm/H_љiCz,t J7a;Fh>lM_x-Z%:{9I"dL0"h=f]].gIBg"ˉ38ŜL#Fg9 48UVaE~"k@((E.>7s n N.]- Y6ύFq@p"A3Cm]@?!*<l*ԌLuRu,g:珿IZB@JQuJL'&(SZ&Ǖ&C=-_kىf(1{G1%~eF){89޸bfj*`]sVoz˭H9jr[7؛&d kڴ4m^뺲lsY5"u!ZdDv*/O?o R& g*5גTגQjcIIDQFFîLqq'B_Jar)b.,|כ䒤w0E'P5I9U9Ep>I*˅s |vNQ$ۜG͉*2I5,{W9LL@NM3Tʊ*,ޥ3:-]#w[*w/X0!%b?K~Qb8?#Eofɍ]DDp'Xя?9lX|1  1&r,6狿8+[J$ CmG0kF\38.~:b=T iO HR2;]jkx4Jsɏo 1vsʯ}pVq;ZE5[zQ ]C~'HUP ڿ}pX5<~An-6 zm!5/4ksjy!Q'%V#3Z?8!.3uDD 'cJ :ArmK Ϋe_j9ZIٖ#,UD*Ű5S^WF7ʾwz11T{^(C&Mce̓ VvPlf'SeIgCx@s&gZ:@ʱeV3[^Hc;5P#lX^k'A0LVXI圁CaNp0V ˝ubF=DcIu@05ZD '1 Nu`*G2S) Í@v>`r|7l#!ו86W2_@}?rL$DTPʻZcBHbLrɱҒ[&#H#@qT($a6$ $U/>QJ&$Rx@ܲU|竲\ 9 +@a[A^#sFTDEr87ܲM_Ԉ[ˏ_DwUo$UHdtTz q=# +#g#_w9E r՞ z˔faZ{aY㸧QfbRawW3,"Yo}' ݧqܬImځ|!_OqIb:_{, VU(렶aԕzH.%N G(p5 %`zJ 0#s&sϜppVNՎuY:O+uCMK CKީ6gow!HXɻww)=`2VJCv6 vVq_<־T;Q"q1XrRO0ϣPخA'ه'ngožhᗧ n06s w\fs%#adJzqy废yP~_~>X/1~4L](m>sg[ף4rezXFmϿF+qYZsZS!iwJ><)B"!5eyPI2sԻ AP/ oogt.m-}yxg2:Q~I'aᅩ*su9%>}Zts2t~ VB#U҃Ag`x93el# a#rab"-(N mH62Kɜ`γeݺBQg7W3gXk81%qɐgVRɑv`m0Eh8.bs= Jl|ЄFnKke+BY s#,J)ΪxdeTkq}yX^ɭ`l,(FTjnyV3Fb {h $'ЗNN/ s_%i:t" 0_%iXӋ'nN'{}-L,]ul{PovWSƟV\2#M *\AB֍K$5 lh\n_J镾*;*@顑+}e5(Ah^mkL:|K`(~ F5Io5V5fN5C!1ڎK6ΫBa< ȯo@Y̥HՊ<D٩(9;bP߳"6w$|EY1pH`c~\t1KxָLi4 IiPgHokv&]CbAy:pGЎh 8_Ýā}ϮVI3m:ݖb}`el軉W~}5"GXv*Uޕ_ԹFe*&ɐ]==2<)1%1*b[9Jp;- } /?L/>~n<'2{y`0;ÄiYNjVU3k(~y>PeXU-UUWmV%u(1 Gp`'yѵ`H^4xƱsZ" 2$r{`d^;rJ 9HrT#z._9:w4/i̷;sqR7O&)E,#B -hrVס!<j>Ƅ F'1 u6qLOEiSBt=&)_.VHz|cR .S׭HA9 ft~$k;՝`|2R*;öf \wjhSdo2_?ըr; 9Zwld6Tz$[y#ҬVȤ[ YCyLZ 3 `t/F[754*ִ:F?]e/n4B IC-ݭVӵHbP󬕛tM\ r&E(yR&gPprWp 8b" ^6qtjJzODZgr+<=|e'RbRn*/cX [t)Rcy)oz9 ;tmT7|>|Y/Ճ]rl_6SI8xO6H;c>nݧ*l>I}$ Q|{P#6Ndbh)Z]3K&hH ;?)A ReS ӕcN/6'+ubmw1{zӣgƎn68/$n$՛C"7or$xlVPyDiCJ‘%n , &cfy8tO^H=*Mvp}h xnV7 /=fhmwԋSV:ԇ٪f}nUUrm'?.m>8xQ:An׆g23JbQWɇ݂H)sW)[q?hj[RO8Z2k|%‘TBLI!ͣTH*d:'uAo[\^&c& ൦y%"ЅM̘ыj3˒YOM]+5۸I"6G!ߛݕ nl#>6 !k1$bOtЀ= pHED*MA 7 /#Rڄ6Ji_!J.E |1 #>>mPZ/hsQ:Zb}9qQbje8Χqe6RtU,9U< T/_)Y{Kp*E7lzYS6ZL"1H,1V.Xi' bFp\jI/7{{Pw3FK)57'j;VjtG숨QrϽ̴35@jdq;!"eg5#Nm dk$~B.䚧g-údl HR.,5u[_K댆Hp.R|A%HS0Abu_*2GwB WZN]%J6F˯ ngGL|ƏmEs*]#VS閆L_UR?NG7oHP.%L>K%^Ze%H׾DUR]؏z ƭ>4r$}?lv^q1鏪NLo:d KԵF9 BOɣ  x:9d%b?F3]j>F5Ge6ՓKxX_9+G:ŨB@iLJegdY9AEhIj$\V}>|Yc') LhW ?| n`_w<(IC)\yŌ;JSlxVXW&MZXk߀^^n`pxasaBj\x<5k4(md6ng2:eB%SQwJty|PDH5 9xx* 80f7yw%З6$as"qp;q. 2qn2؂ ohDÆg)JS;$ZՀS!N(CA ?o|0oɿ,Z Χ>ۻ+τ,;'-(YpObY4?xK\\ XHj-Fn2뛷璕f{?t߲[th:ߧx8T^3R\H @iϮŅ4gu~vgGǫ1 tZOJw&t;X9cGG)qaImx~߿kmx)F/]PJ9)s6*I"_5ui.& zoVl3]fz-T:,O"j>>W&i0n)׃!%H5W l+IP c`QT"fGwBoc&s!r| NhYw,R"n\ 6t% /V(b X%O8Xe_.vk ;= ZR˕Z3CSPZox W}E`I _: Qld] kH%czDmإ(b3^{/}\.0|FEIoe h{|?Z-A{lEKwX˽AՄY %O&b plUk9K"'oځl[W g:U]%|N7|Th̅we1&@$X#X)&6YN_hJ -)ReYtrE*4Sx Px3H1RQ0;XxRXvI3ެ"Eh@ʥQkMf|rGh6mT7jլy8]=^_:koZ, g'Z4DA"7-$f,vCFW:qw1 ?Ͱ0c FoxƐ!Jx2Ʀi=7=Wn"mQ[5ϳ^䬎2F-y̟orFzn;ǫ_yُWGF(_מOI\߄u7{gqOE{Eؖxִ}0G?-q0,c<ɎpHǖP[a6 '4Y=$zV.zŽSHנ~j(uXsd7Z)\#)D*9D7c.G&z}C zdk?<X醤WBuWB_1m뙼 nx4A7oMrhWFOxC2>ͥdP`cCbb efsF9m741d  XsT k0%eq(ؐZVufH$##bh R}Yd'WFA>:ᑊu?-&"=. |!i&fXv^ n{}HM (c !`I%S \Zp"XLJ*:s ]jNqW. XVGx)@c8P/Hu2p" * jɛIRN,3jy>UZ|t7UZؒe淳q|8Q:X I&$ Aku(_G+JCd3J)<Ђn zkt? 
f݆Z:pi-0I}V~0"ZTЮNӛUӵDuz [N 4*ů3ofm[-?Ӭuwm$Y ?&!JQWhgvJ]ԫR\]LcK 8"L cCI8Ȉ$%$3!u4sJ nᚭW]X1( $ -+%ΑQe=|!FT.a=G $JsMk=O8V zzp`pDhvUyĐXf s봪 G)՘HU| %h''\-H}7 f fdzisj {~8] SDfLn=H)k-ȯ'}uWULiyG;s?)Ew6|3=@Ptjg2Q/Veg~ض <߁sK!~ui׳k8;-5{u⬋sm/hiv œa][?5*CU .Xy<͞YygQȅ)־)+cyGJH^VE^-=jz3Kq'W58{||;g5qHŽV¡IZgv DU[9CEi[t^^µJ3<}+ꈙ ޲w$4&bݻZZ&ݯs.R@>ޙ]ewg~Qol>o=eW0Ƿ7p~hEJwn:<̬!͆;p={|EK[&!k,7+m9PBe%u -+5#RgВhGe̅D ^HGt*``L\kS^ ) {pZj_VKƔ'K-wpNç8QKNՄhI*4ԠLi2PXax=J+Z±bPs(NB-4T=DҔ3>.K#λge$䚘wh8\,J * .YszT2qj LQ-Ey 6 q(F %iF{H,K 矠n4<6ã>RYm2) 5nWn:,&o̠yx5nڀ+4=o Roct쓝hr;cߖ͛Qo{;,"80D0 "4=UvƟ?}N)Ջ%V$bxGhj޿7˸O'1,V"!n,  c'p5sI@r NV|c^˴zOѬj֬B[}FiW44¾GsrcM^Xp9g{GZi{%??F[RPK|X\!?Lo}(EQu1-z3y4;5 9ThD PDǨB7MZq04T%$0O|c ȸ&<ޱ\k-行bWs "R(qPCX} h)~^|1m] 3%ԗ5dV{RizwGRDE|KL$$gX!&.>L Rp]>9; aHU ܪA)SBݝ2d@w+mS` *pp~A#u#.;f/_9.4A}yeBSHIv*!Dĵ{l/*mXdeu(t7^7sAޤw %_SnIyMB/|]FC&0/b0f8]LsuKO"9p9Z2e7H~ɽ{ʁN咰sFiĔ+46EJ *VEEӰyĜq+#vb'p,.2KiS/_qf \n9)O>CݯS7ɨ3'LujCicvzB;||j,*I䴣8]H|xB_\ JD] Li"4 &Eˡ~I*@hWпf #5 =M#g}駦K!F|V"1 K,N yMf;f5Pk?j6;ct2CH~:_:—S臧M[nQ&|E'I,{81so4RizvZjCH(-4 H?1 Tn[ٌmfaBBXL_i^QQ@ªzm/0lČ,}cll kr#Jf j^( " J|E E!>6NزOZpmI﫼ԇa &]ʏ{>]xo_d"F~%5djHwD~8Vozɛna:Bޥ&Tyo{3gJA3ȃ `5.ϼyb? V86.G`A,kIwnP)fhɏK-JN Q0)|M?'Ӻ[ >`4խ鹳,meAe-sJWI[&sb^<}82!8f+JeKR&:i]m 0M8'$(HA- s04V؄}V$?[9GOW"d ' SʑVwQ5ol%8_HM!޴?N\1.yNp gZƑbݎ:|?.INgn?-Iu{H63HC6%Jx7ݴTUE֣7[:{%BֽjT!|2_DiSʣ//oS-<W&Ov 0@5ZJn04:vm78$`Xk,0~f9Vl.:vK$Fxj{BҚHLQc0 8MX%@4k*玲I0I✸J1Y e X N <EQ YHzf gHU=DV^ALeZɥ^ku$8/enZBhW(Vj\*2 ^G ݬdVԳ@$ub$sg-LU+CW]f,ŘS1\ )6{q2RPʎ&XGnV2y@j$挂Kd#5cH>Hs.Z_9^xːK\33Scb,rB~Xo? &uu^"$0yk2ok z\CAk=co_&#emiB 䈰-ّ C[W{h80Hԫ- E{֪o˭-FdTn G%DviQ.7< \u^c j6fJ,&TMPxf}f͛#'eвVaэ[ֹŐNԠ؜asQh$&}(ࢌI@&ℓPƌPMƂ%(U0&2&`f8S!"PHsSW 0XGnʑ.W..:՚FumneB6ge_KTWFĽzYnv(c4{G"2@Y~ے<|sS Ȧڑu s339 .ʹpEZ:4jyu^;zjIx6 ?ëM^T'7zbG06|+,J% 1 voP.Gf1#+/ #'y%|nYrPGހ6A LCW^#Ai)D2"LXadGR2-]e~yzҤ:Wkk宴wU㗼/<=6pvz8=*Tۋ8'Pt,^P:'w/' sBpwnu}(K]. ƀh o w+JN& С^U]ĖAٽ:Fe!ui)ejsy"]@[u}0DQs98C}|PGh>ϷM/L~ Zfz . z(%t=EPU +,FdU) G[V; {cdS{J@$IXXCm6YLmJ@y PNQpNWv i(j/Ԫ8`ZW_*kmʧאeXOI2w^-92En1%!!!ot 7"ϔlOyN](IO(X؞΋ҜE<++ R0) _3wG})~I ɋ:΅|3{Ջ8v*cXz)aUCq$#7Jé [͖)p[(5{of9VNW|""S,FBh3ןP GCQF P,GspD}phuΆMmipF{ߨc%(cֵ5R.jW{vAg!@ESF1ERC0ڈrH"S 0I*w]>n\2qx1}^w/h ¶3Ɛ')b{ݯ޾xgY|s!~oެi瓕DOJ/h\`S~J޽uT\]~̮;w3wow*os#J^O?p>ɻ:g3\lEJ E982Xc4>\+w]cc حљ-_{4xD{[{aivԏ,Z|)t2^-haǓ5lng3l&xj:Y_&scߩ;fY.#t<,[AU^WB! [t4lqg~ڶ[^Z]Z]Z]ak՘̰w.>rt$>xsɔózm[q֙loYdg|ڜXY-N l\=WȕgD=aDy>My.v=ϩ$F(7%DDCTH(҄8NB:QBqH wL\6+_O0rKfB,/|hВtɿ@ K \[x^Pp:Xnm^@8Vƶ@'a" =&Y"f'0%+DS}{Yn`'?%8*0ZuD111eXȡ]baQEnjɷ+BZ(8_}2`MXDBf7qTV]9H;8YoAGthM(1әI8ΊٟWwΝܹӂi j2G2kD%EJXJmB8™Q O>%vƆ&ur6yݻɢX\~FW^\2fleS98/.Kk3l5}rpZ{VݢFtͮ0Ct%h71e T鴿<]|urdxT*X(EӄW'YF"Q_jBD("E1c;$||dZ.6M<7HG~+ӳg(${}ҝH0:!")x6fXmT9*\xQ0"1@JqWL 'ri@KB$v/!K BP&R5ٓ.j(%<)!L($N@*E֦'_31\Q ym8s5wjxS:E`6ٞ7V&ɓ+5۹۸ ݝ\ `-e?KpQXE)ꉡHU&QMH݉jѩQ@$Lb`$)<&V*SZv 8Ð3X1c QPZ]BQMcl@H13Y+#@'Ф 2BJ\lϟ\{z7ۿO3/7&==_zɗޯg ]o͛O~[uǧ}k4X`o}W+Oۯ:N`~g}[ez\gOKȎwOQ˿BB. =N'{?t㞇ka"+4_V&1.8}@FhxHYeUVfVDJIܜ_q Uk ?2'J0d}/%|O} )iTM.?\v2=OŤ {%0O簀_^ؐn ǑÊw" a2 [b4r L=ud%1YǞBٹeO3:IJijKIjeBUPՓ+ Y_ To7&#on*7TCx1IbU'61+P*Fc5R&Sf{'\s?0/=O/lFXhcQIm<摉Mš ~Jfd QgHb]cù LowїMqS֝ZPf,N>X9*R=>#GdM?NwWmJRva"hq p5E ?|- ]9p.r1"E9dԱUӭ $D1 7X+dVXḺ{fDD0ű'x?ӟ5A{ҥ tF|:Ge?그.5?V֍!CېUG1f_ܑqpLp&\0d`3A9/Sm 0Řd_Lu]E0,~JԤ='^ )۪Z,%O:uI昀"ijN2G`15u.a{;: $A> ;Cwcy( GJBc1b V q. 
[PV-6Z*oŜK%zm&eF)jau{t\'9vQR<8la'p1Şpɶfpg"`%n){\&}II/&K*"D)` ?`Bq MPOhjی-܉>puGhL4 MLD~x{kg>-+_KW_mKq3E8?MB9qJn_] =Heʁ/Es>.L$ 0`0b`c=6jkÞijbb3DTb؉(p rFKqBe3D`@!"kB{9l*Ž9囧.BL%y91M3:88҄(a.22Fq HHtZLZ)+M5f.4Oׇ/xt ?\SEmKV(-]0(Eg(.J$Sf쌢J͑(NXR%s!QZuܒRlzSڞkOUƤPɇgef&P [0ֵm剓 *XIbc-1ҚSo%B k=Za΢.c:WtsF'wLaC&#XZ ZxL\ק4 ,E@ ZE M {!n'Jiae(_j<S&рK8:_ c-%|IfT?>@?Ζv8vpv̂q8AAro%Ue$Dy/%TcЖ#("f3Q1NԡYC؂cPoYp!:x';h&USY kT W;PyZ+hQ n}(Q5)3 S:0M@at+@$_Pc㙊uUNq*x $b(7 s(INqn,ax02q `w2B RhXY'+)^ jiL#)%p~_['z!!zvL7Nᐟql e͵h%N`  iq.il0 "b"RL;m!04% Ui/WYX lh;VW1C =^rNeyiQ>X7zDDZ".2a\VK,NbJኦUcڏ{c/bJ3P}[Z_OPzY"WT_3C:8<w< <ŀ kV=ALEɩWM J gzuŰҸźn,ndRwG9L B;bïr!+S;WGOY?HĪc8k)H+@ŒG[_94TO,B% ^oY!BqDwC^(scǹg9"U R^!L2*Pb)YyD(ɔ+V9R ,S+*q0[380F (v1 616$(  +Q{V!ńʘ4?lth$n5T|kSm[) V)'OZPar'R7c@Al` 0QBbjL*ş'K1I} j"Ko6U oz&0t xsov0ZfOBܹ}-%?omf,GCNOuo'߼l6ގ>võK5~U !={zTceF5FdʠZPU4^e&sĄv5Q`g d"Isf/xU>B%}ojD]Uvg)F(y$w}VTa4ib_:Q=ߞPp; T xn/~:L d|a@'S"&E^HЕ5Jz%Udܬgv99b}B>&M!~3ܥH!w;B=;@ U%mX"x!F< cw{AqȋBg@gEn/|B8A[.y>M1FC?u@-=@`U^,Tc"f 0o 2Ҽ4~/o> 64penEs k*g"oVb)oyskb V86|1Mj%~fävp'6ALփ-KvxJ7C0Oc(&2d$L .e"8a$&2!N2 El÷Ɏi`^X9",1.R Xlu,yF/=IgtK| h8:`RcXFTl7j8fa%ؿo4x+m跛yJI4&K\?<0VXxak;}a%(qyۻwE]Û7d2x BS L]&\c?K2 -_?a#]^zz-QCX`_ glD8 M8nqt=#1(P ;fil=TQlJ@*pP~ FE_W6=0mjiQEWiC,y~>}j|楿wRTkRys"FfE"IU&!JUY>G&H$\IS8'y9Mk'1[e2ļZX͵5 zc.J]_޳w M\o'i( "<蛕h~Tm/t^)@SSU@#[d)'a2*zܜ6>g:V*Z#K]Guцnl僡6d9eJ7iccN(a#mSEЁ)(jHRLtĂ7 V>Q'2g<&r~7oP%hZIjfNL;mb47#o=i2}KlweH~m`}N ۃjt2,kZՒ쪚A d9b)(eKd`0 F b w գ^jU4]-^X+Ph@P?8Ae㶓7JPq>PeO<$?bTo`"hlhSgd@#Xx bҗX2pcK"-Yŵa`0*V[ArXSzRQBk-LYhŢmrـFi ggC+*@q٩5Hud+|Vf3>Kp4R,,M|RqK)?#v(- tCZ&#cn|_O'Aг#[+FY\i֗1[|xqZӈ5N6vz)=.(tʂ1\k_wR; kc[R`j 'yBN2b(XW*-YBH#ҥ#ځ\r Iْj3Lj9ΕTnř`!u =gN ǽ%ŌΉM-U;8,Be;iC*8mR9 CT݂Kl' ׃a :=4,ѰPS*ժ`伲$()Sާhb_/0Ey;ⷵ”o.|[GB]~#1>`35YXb0/HVr;XYLabbσeOMdU8e} cDbjT4uzQ5,WIz;b^Xj*^rPUO1Od q- 6,r+0[]J$U\o࿞| 0te˅\6Ð暒hIG i%< !-N6iҰgn9k_4 :қ! Zt^89pI)alѴ)aAəo:4''ǥC&=m1yI!H8殶(d,uf;:o}[B88嚠0gtÉe0[ :C+ EJ]0–KN7Rx} G<j]ٕğ}1;C2"X lnjoACujw(#QJNbğbSl7&fFHyOnwD>›Ooe4$ad ˂(sZ`˂i)C0B }l@Rzt|A`&q0iaAxߪD0f %âV|??΄s BlfFufpBkž'jlQzz!o&N9'c"qhCD)BvP\kjm3_ޡ>&,Kcwl-Lo0tӭbZo?ܦ Ji![hte+ Ŏ>&kMRztЀ\sK`nѷ<5#g]caAxdvQ 3KKgY#^&//^)x9քS~NxlCb}0 F &cM"uIo}:|8;̨f0~pRhmobV j\Jk\@R:Czt&M_ ?xItg&iMuc;ELUa08M\_eCUq]j@&|_.,qq3;lڕͷ2Ҍv'v4tX>])%s(DL4n] 0< nr=-Fi=w׻]z9qq8^|c=X<=@ =X;}z5&-RZkdֽ3p;sJqz#!TgF:D.WJC9I|><]o^d #=n]NGut'ҚZ6w*zUsWblQg32DNKf}WDw$xY!)pMoJL'$ho|~Lވi!g<|\B2Ӣ8`"E[gQzy?z5T#@ ",l1XSz1臘^CL/^tI ,Gqe\] 0g#$^q?x~heyrXC5zsUl @: T;y`~+}v%IէFm\4}BC++y ^Ь _I*星i%h!yò t`Lq!$wkxTTqtI6"!ڼ6Qʍ5#>O֙FN)?m_Ϡ+ۓtRDSmџGR &Z`02J8"I S `o2x- ȴF##7u;%r)S˵r㪞`p!/Sd5Ὕ#(ߗOŴ` +(~7*wO_̪W1\ s79^J`Og%~|oqQ ~όle>g80CH6"gZb^\KE9%`V>奴 ap!d^WSw?ۿ0\O Q5F˂3ZIpBN2b@B`D@tD;Oyab&鎲ݟߏ,E"5FBHD,WqI8j I6W4`.s5e*j`*(rAPki ʚWr4jԲ ȸVI\lL #a1%@W V`g  Lbk+u/&Ns֔RR,0_Dk̔Dk3^eRB%xb䱃1z@t$80, `TNi}z)>I훈f"pNpX_40 {9aG9%Y@s. 
;%kr]a@=VA9Z9hFx 3P+ڃZX#DX/5SKNivkir%.;af\Y^1.OVN( ҒP7aj `Ҏ( ,<5K*#9)o$3/!~8&q=zL#W;5ƴSp/ KE>qt.<-y}.;PSd UU#:).kT#[XP' B ua#\Ǒ+<%=` |1&;JtEǮ7Wl]=:C-x4M# U8f>wQJ+F[KS %9j)4}1/iU#9}̰2(LpaDlR0V')VF$ '/파9%jJE\FU WO#z(RBq픞)m=#y''U,wx< h `韛5IO0&c F: szeJvpSpmAtG70S1zos2'>ϞA%{\X"c{dAuKs2X_1b%1ChT$/,ͭeEE>ѳ6 !,5V+K.[ceGWbp)H$9sӹЕ(nRU$8VSnBj$lB"1@YRFQ*y"S)*WQ TQ0/z Ƴ/MCNr|jPp k.,+66}y9Ø<0 hJPp@u'pD '6] {6^D2$u \{0G۴h7ZXIm&Q &%8j ݈Z OqAcCy{6o.lU 5kaAXTYA=ӰqeCAbOX[-佘wӻC~ޖҬ h BO#8۷2I[˩6Gd'sv(W6 ¬V$>7}rfq(%ERS*"4b 怆#Z#:ىRcn8J&iwo:t8/<bG)8(6=p.VGXA^lT.)Opy\t@{ʝTWXESOrx9Fu(w(.|~+tQApX+]+r*(gU eDR9,EXcfe%*54(kn_t@{ջy] E.h._`(McG6Uuci U7 "PzTFZ#J#Q;ZyMn(ڽDT8 G.T([#ŝn:-9WҾ3K*ϝ7Pk $xߟ݇줪NF('ц}7:aG_nbҏ/&U8j z}XTcΧY O~zYƪ="և]M^77_|ɨ_f8n1;,L[?X`b'ͽwN3.w$55&e # k3A:|GaU,3; T͡ǦN`:h1O)%]w_,mw.ER BiŠdQdg >}/Qds@8i?H9@Fr@nªB0Q17*D)*f͘hnϮ-2p1n_`DXիtrq8~xBSImDVkJlG' rG2h 's>:;Pk2Sop̛qډXF"LH8s16:G!٫kZN>szK ے*@B;~P1|:^O19YMR9V˵޳=+F3R`a'w5C[!Φb(btdu3}s#Щ- ׭9K}F@͂qTb^uim[r QbmZ9yVZ4ݩ܌Wb(Yx~0oMQFcb*\kU|O62G(AP[_cbÒ_ᾏ#km}Unoo(vUBzK'Y!Y,{?wx]<=vp1"7|A:q S"Q\1cO;#bXcFYsǮq>g7[Ӈ{ws&.Ϟͳ3Gc{q33NOXE>r>u ):Kw-x%rz+5^l_vM`uh2~~fٰ㋆LXwK}/_}T~Ҙ/qbkl@@!  gsdݿ>% ~Z;P)XQ"DBF) ka\ eN܂pQIo%xH\&ɬ,ÚkI{-x_1(raDlR0cTXSRp파9%R+Q_z7#QLCt4:(^̠ Π)RӇSvżC$$ŰJb$$c\$Yppj_0 HOhYwtSlt/QfڅSjj}j ]8~00T;V^H9/nB,XbA~MFSJ0tԊzv_fqz!J򨗻/aG6Mv>fYԌ,@=OS`o(Xumm}C'Tϒ""ʦ ݬUܥÒ[ G ,yl,b -+0ƅDb_AnNXx]viIvae¹;hvo4֟_ƞlFU:'6w0<2+/on/R;=M?_zYkNE)KxV;uKPnKҭ,BNwn=Sn*ݪ!|?`5a%mnNJ/7%ʍ0ԶGT_4>0(oVO>O!4ߦq4}-%Eڍ[9ڞc!R Үqd4ۡi$ڻN[^oJmٍRB55%"jS''3f5b3V:lJiD xDTN}\TD3 U[Htk1)9wDJSf<Cfw7?1e7fY-*w,﷖ 6q70yn։ g +pa0RPPzA(JR"fݣɗ< *`"oKքXc(%1јfS:Tu{)Eh՚#Ŭ 5bVk #û6k2@P)mk$ƚpdbeQLX9 F'\Y^2*uH֦BKU|̱zd_ΈZsC^$@CS!0 #BRr>ᇮA8xCAu.H_3BU,r^ j*iu[`܊l0DkEiW{SHG8C=%^Tk>_ qJZáyC=o^t\>]cV-aeVôR7]kr Kp׫_+R%jێ+%X\if䢥^׶B*Rki\i9-ԁ;5Ҥ[W!^V~)QRwjhʐ% *NIp؄3ZAU%)KlY)Nv;/'%9.ċ?w]X-ys<3K&ϏuՓUw>8Q3gDy6xkɯXnAR#)|)d",F1G(c*Bt! |@=tdY*/z,Ӣ0Ķf_ggKumY{,1r帉lUD*&"NQLF,\J>IJcnvOs >Zsgqg L(ئVh~)E2=@2O8oxi!WMWd Z1撫@em?e^6ӫ7Yw!_ˮ$f.1sfwAoa g XR~| 5&)̚t4->Dtj5B<JRLrwīHlҔ;#&EOYtbަs J`mup;)3o[uWo=Ar30{YV5h~%@م!=si8t du0ύFO0&c `6J@ EV 2{aS1zos2'>Ϟ sϼUg{q33N'=gt>uq)Th}c%?mоRUbpI*edL]82y)F3H~ad\ _ *2K$vsk]mo9+b[;Ya/=n;~AfWlIVKj9WrTXdWգt$O| ͜knX+=N1̄ۈʓ/ɮ {asiϜ9Ā0%GB%i)!SZ$ : ;&hwzpqY\g؎HJ G=Q'^@гchu[䄈Lz9nL脃㪃I^|,@!=՟-◾߻#WytvǜRO$O`JIƔ2)y`X*9S\8F *)6~S|\}YP7+iZIL/6LrICk3f#ITLSCp~i;e|8,M b4Ɓ!}\+szqgLrm|8ȚWղ D6X8Kbv OO=ۼjijk?bՇ?GbOi~59|$j&; }td jmF7#k#Y7ݰ:zz@|Hp(9GA&婺At|Hr~):rED'z_/ip n*Ym=y[[.@tF?glCDx;Jy1KJ)nf/$#"6YQ* Q1BzF3EA_s WzDō"B k iN8CaH,Lœ7ų A (S̠a+](D2JB˹k4=%cseDd;S$wr?t9Hqܔ+PDЕzg 8%sI8BmBCL:؞?s{ Sw" ':2?I2'ayLmhVpP.K!:r_t@50%u9:tT%2j;%UZ\JIZu RIKQ/?UWvI ңeD5! B_򈥨^ `'2 H[)M߲b:O!ETRin-1w#Ub xr9nO70B?*^)]t{))|h)%e1u=meGLZ^q'8)t&b#x`$\T!BTIKam&xHH` 5U$O $廠4L}2#i9IJ8+02"Z%_5?]%3i519_3Mcz>A_]CO`%j529ZS7#٫_TN~ k#9D Im'`347T_(T,Jzq)/ za}]@֌]έ +0d{ڒS#zf'Ltx0g  Cs#Y.DaDWaF)bv:)9H{MҭJ7M]M9JIm]ꏛBo<8Vx O) H(jAqrϩ̃`9Ӂtc~i;&oxlw,8e\پ&?jQMh,9fk|]Ԕ[ =ZĉY1P-'+@@FOAlO;ZJ {Ŵ39R)!CEG-! fS8|;4КqP;vd_Q*#)4Iy'N(x(l:>ǖƑwÇ.$ Lz!Փ~>QCҹJr3sr?5)Hqa$=DBAD1J8=6^9v Aq1SYl[D4IhAOmU\N]$1Udr]]r<>f>}5mIx$զڒTOL[-WVM)] Kcfb8(N[5cW u kk3 1M)F9)ys fi͹D#*0`J녨r< BTL5{9EXc3zoDc <i d7{5F#Qi22BYw:A5=c#m8("!E;FM,m=f'Sw3mI*uLfOh'u紧Rbn9Fm/59?ExHw{qitYAe rU{ Um  mp7O)m }Uiæ5^>/p08Oc֋Y@_oūx&$2z)zݵdgmbݥ;yTchc4|{E+)ӠcG8~ ~Oxb>*~]?Luj*QE<|c ΫڳTUu/X>vVM!|OMM3ىwVhwAt}Fvwԩѣylޭ y&Ŧ8?9n)xNg4nnY*niRc,䅛MDbsBz.cbŢU~|׿}4?i} ֟~(ޟr6}\m.(y\/?KPֺ ]vtٻ't-~t( ŕk&A'Th梿?)͓cم'KkvquUvQxWdת65+\DϗVBMgX=;5UIAzB+%z#5U7mw\L}p9! TlVy]ZɮRRV9K9yL#b*[6}+Q]@z@ &=5I+Fh)eZ]'@'DYAzF3EA_sZ\qc6ɜY(!XWp ;x p(w C" Ot4nqzݭخ3в"6Y%潝`CJ&+dQq|6. 
?VO71r^8t% 2?هWo"}&2z}q#/:`xf Q /؀f[OCI#)h:sJCMg9Jiz= _f i|'JF)!>5E[?d^:}cgLn\'!|rr=ALUrM0KJ *.H*]bE@pN4ZOuo--z`Sw0O@C6; k:x|%cFC ܧi\=5 mfSÙbB1ǝ1|Q,QD*b Aj. ჱ^;Wi5KhlHھ+lqXTٟi?X‰J2&98c@ T0˵ SDBzBBe,(*ݚUժL}鳿K×ROUQc×iWLs#I M q07gfO{5%@~QJZSy .|~Y.u'a)=2!BL9)}D՜Bo>{g2Ć*+ kaJyy4sCjҦg# w QYii qu TZmRQbE1*"P._+"U^%9x Ua w^S0D+9t57D_p?{Wܶ _|R*՝l69$R 0%7~=_@@B+&)[Ay==3Y [H5v$QM-iĹ@)ƤDbijU$N;Y`py~Ć (LPbt`&#Ib+)R+~`FXjᘤI\ bF ,q﫞_LEށD:l̬5Z'R\Zm%P$FD$M!ZlQY[*iۧ˨hd6io|w|M!~CK-΂,Ńd,#w߽"^@krnyYo}fwz ~;x#\'=Kt8q> ǷW=ϜdE]f32ٷSTyXi>^2ـ )vwJ^~CM0(-Ia)ٴe:=:SYLL$U<,%T1FM,j"O BPxԒ)E( V|V`dKxnJ)RV*P\FI7zoG{fpnZ6-(1 r3Z\$kvyqQ1:eQ r)˦{U ,V]m1:y$Bٯ} )sG9& SjQ JxfN9ֵO?u`WZXQ˻ ?/Dɽ=%_ªEfjNp E0O>ˇG>gQO׵_:L|"/o'>K8!3u~+:B/,fIB^0$)ާu J87O*)WSqjvCE#CPC[A@jAOA}ǩ-w&VyEUӧC*B%_TMW21PAG9=LYɺTaYqj`{2]fT9{1Uت]HXTU;|%]Ǯ*q'(E m9Y[g}Ţ@OXp(t=G?Ug5KTcKꮱ,{Z_qt֏>[W+ S^Q@ BQuOUU >PUJ*z:wUbJǣJUbXn1j㳶#±3%$y<[$YB n=&i͖nX(|5Rr9n$~Qǣ'ZEk5*8t! \ő+V]/im -Щv߆wHdqogXMjz pûjPz4QAXuf,5ܕr̕R`t3_"X#qwՐZ(ݨ? L$ >Tmpnek"/FRU >%Th}r*PyDK.?OQE 9 1M .ʺDIZLQ}b >?B19HZ!|X bNjӌ!کL&BCtQ -WdkHIX9VvԔb26%ʼnNdR+JӃ۬\|dHC:sm ~]nWXx.ިeuKs~NgtWU޶TY/wͩOnqxsL,vY7yʯٽ\{rUf T5ҐWutJ4XźQV=Nhb:uQźJr4VmukBC^)ޕ%름uA}FvDNw5Qiq4V|ukBC^ѩ}#*֍A`b:uQźnZ&4䕫:Ɯ7ׅM"J"qNv^:J>HJ$8&pU(BS8r.:O߫ 9CCcAp,$3 tɏ*Aȗ,J5N6E{q"(MMNIղ")?s੫0fɢnO(cJ՛u@?k֛k|kUaf=:oA-)m W{Xn W{ti#M9+5XQ.DφaDpè1;)"n@,4gA>M)#t?L.*8px}s<Ͷ;b=Lﲟ #_G3]f&a|x_!*sxWXB+)!/JIEBN1E6ifbƸtJ}$S;E PahT0DJƄ!Z8a,0i Sb 76jTVS<->;i_ZZi)Iߞt2Kx 06BSDR0[$ev*Mb|n{D*Tpԙ.Pg*:BE̹l`E*J+؋Y"!b602Aɹc6F "\jҴ̺FuJ`j1Sp oR):JT`^D&%Є3mupE)FkMSPSY &\=L|y o[sEkCXKw8"/AcHl0w`I5R *UO5Q"bˆ0 fu*k",Haҳ`4Y҄QFp弜S>!LKuRa☥-+h.4c+1&323l?`>APR%,#<.hgH0:g/"VpwCĎbS7f+F[!Eh nL$oK{1n]V/['9ua*5vL_`h-Vib,5&# DN00[ ؀8d12qӐX-ä.%W NY(hxY٠:aC2Fric0$0?X4UVxҙfqpdwAU`Jl9o;Dk=?vի RlR_i|MqŸCEzqud3ÇE`"\Ĥk1o SzG_KD?sd:[(hTw@H1vtLXU/@"$Aؾ#F=,04^~`ey s? F ?E_,~7 "vd  %0ʗ@(򵃯}@Mof!"-jj],O /DYI_^˯r>ͽ scDv8{Lu89R|LPߧM*NL߁*?V&vZJVjضŁLV0sU%*WLE3ݠފsJ. JȀ*pV ^Xnք-C'-<{AA}2k˯0IEם7xB,}L_E@|vE?ˋ LRI.|);4h;I_:SE20@D_w53u8#r7o 7imeLt;[cnY W%#lWzYja_j0LNp\ A|}T QI!8&wAm0i8!6 'qsq9"yMPUE)؁[u5'KnIbKbCz%uMHbU-|WzH_r'V=䎥џy2øx~5<] |͖e1*%5Ͷ4i26]ފ T(:S*%i^{w^|3㏟tx; cy|-rF9+lMYAgh|;5 Ѩ>dPᏹye b<}܍sSwdk9{{a^k,}ed:^GZ!Ҹua割,}vroƪOs| *)ؕUхܺi! UDZga;L'AyƋ$G qW9Mٿ|r8aZBS( 6 gT -!Y(+ 8_iN5{Ng3c);Qx,ߋBZcV,%vrwV|e(ʴ!krt4L/g?/tx:o<7E:wH&Ç~hJ VXx°R1f-2Z EݏwmH_evŗpd7|^||1}(T). |0]msVKyhyhy4mIB^.SR1Pg2j W'l*Йc#aj>$t̀SQ<8.h%diUAi3 gvGXb)MTqAR=qK4tqBY!Ԫ[VvjFCnN@\RӹT#뿁9 ]~)-'Ĺ%ČP""9Cmo-dq(wF=rNFpo.n4}ZRb2)MfLJCet1X- ZKR"Ip"#.)!u8%h-4&J;A` ̨}OBGWljEsiuNTJđ,J^;˵r ZEVO?Ɋ-}mq?FLq4 B;,C, %@Xi8ͥ,e@RGd-hD <4ڝ`R{ZQ2ȨT:$˔6M%̸(# roo հDh%/&L+}^@#XpPQCZ|D7?@)=W(+#{$k XaN 6x-9',f P锣R9dr^4@MR:͉C6L<)yG{d`Pq^mDG`3(;%.)D:rXg'^hQ 3M ^֒g\7J-Z,El4?Vd (-,7UnBe 2D)rw^+4ӦFhՅ3⛅ PPQak<-!Vrk<:ǷB}bYd">ǘh\ZfLQYoBeGP&PO%ȃ<_UgadHxl- " ~~qN#R+?P~NCճ,v,e&Dj]zɢ& kY[@![NAR򄽥g_ -#w$O*)/L%sIxl \-uLІy_,'n2|9!f Ɖ\C:z" ,R!>4M™9S0G~pKS4P:=}г;C|{&jhmn`~ [K^*4VuFFvPx&o-Y!YA8<)K%ݪ;*_ذ|4!%rVPۮ+==6>eQ&_5M3Np5TT̈ZcM<0>"FVȎ ڛÁ-t\n0qk4\Q>F#&@`8N[vx!(f~t&yCyx$P*i_a{S1PdhX9 F;BxҀ&b8U[x%<9y7]O[0 jȢNH'hQ X%XVQ* LX Z6Oh"sNFˋ*$L|=Т7 |W%C!T}(W[&׉Qʭe%R1>9: X)Sz/ qf'ϯeEsȭl:עAy1[f?,9G='˱H* IBjC=0c-?'o2eTF%RQ 4r4i׷f{i?ti yef4}l\_iB^aXrAѶRs*n]i/c^kTi%w=+|U2r̟*iwW@.7uuʸUIt'=Rz1Ϡ+=G'pEDRg(\tK'q ` 4B_U7Rw"&s#1s#J:'wXFk =Pڄ^dosC mڗh >$j:6:1:ȀLZ~3c9Э/ +AUW!F2i#ΣY:^g?Ū9 -5`-*fc"&01AH*Uͼ'&Q%^2e˨*eo9&j`%92ƪ|\G,~ b^:4NJQMf J0C%w[鉖X`xIFX鼶XL<~ `,c|dY4SBCq@8'&i-<^8)p@~Z~pC L's/93BoF4I8;Iɤ4? bU.p7'DJbҔ9,50V"  H̴Ƿk_&բ4'(3;VXDz)m Zi ]сyË"*.Eiro3gиkK'SOk?s%qe@XNs2ќdZk,^P@>xIV,u`R )j P7c}!.4WxGvi.? 
q ( {bgy&W9N{us1fImoۏcTbʼnwHKψO;vJ9~z/)>E[TI]]rePȅfz̦9~;UahKn+LvxDs:#`).*\b9CʟޣQQ ;d{g|v9$z+]6&yL/Cbd]OGkW&T:2wv ?z rC+% ۇoI;C7O }U =+y*ݧљDsa!It9J.{_8X~q\t&@޾#[>oг F9 . E!:^E.:ܼ(->3 R~5ÉMi3kk$+.[Św/J _Td^*M>hx"9F"]2YbRJ+Yʧ R{1^ "|~{ngZo7Oe.W(9cT3 \W?v^ŧWh#Oc[\~~|>*sK| $.B7N<ʿTO~wmI_!n_e _n,a3Nu$$~$E q3CQ^-czj~U]zaR/p,#gHSn8{UL21f4/g*}ҎGɹO.L*$zB"bZXdnI'ȎU5 !Йy(g# >T*MVY'1 PDlb&l#A>86=|T|HIs *b1}IV b-SFYƓIK$NCxKPޯF%JZ.(H @a  K#qh6iIԋ@pQ״%|ϲxlbƳ8L܁fȹ69R^62NZ*:I4#c~FG1ZLak!2Rv9Zs 'R=tM9 ne.zQHN'ir؏"HMkN>M$v&%X98Jie*34Gg1}M Ӎ[xiq ?Oϟ Ȇ/EnSye|Pjch.]gn2̩SIbҺ%:c_jdgiB=AlI># ki Dou.o#V{k'mpHx&y>k%wڸNSERA{!z fv揎XF`K>TR0eU#yG{l5 ̍'Z&OZ9.QJ;\N8c6S&,X2)I8Z2cT L )$]q}DSRUz$@~Vk)]m,Gpɽ^;P\q/4.?u' /i+8'1a!^*Z9U#EUQLȃ;aP| z~|c!H'[qٌ4WݫiDBWJq~J:KD.^c/Xq*O R%_Jo 8v[WX ƞv!.E#Cjބ\s_'FM9JJ7)kGme.ՋH$7PZe% CNNkL6>:罳c1uǐV)ǂ=jl0y6X#vM8eɈ5~t džEe*߆+&@)Ӳ;Yڐqm+iEI69g$ß׫?<8+%{޻wo~u7o˗vj~vyX /;D{&.u`&^XƌQٙP:BB'Rp{WO' ӬQI2ФՊ*EnVzpy*`E3%KAy/nj86,PڎcF6ٶNlW#SXJ}:es*X7a6"mKpy PnssfSx.kYX/*¤yMH6qn6Ef?:+C )42/EZY9[^KY^nwؕc,:7L Zu}B廊bnI:yiϛ] 7xknVPr9+U&'X*[m-^M moɲe [4HNyhxr=ݍ' -q,W'$=\V~rזSvj}$y~Kt`Px}t'pQwnFM6X턒RwA LHU\~LRR.:q]ytY~czI<s^Yޤl;n_vbN;^6FNUMoV.U's>h|4Wއ(kHY:;O.Ap LFj:r*-k7nhˎ ҊIo :_#x7;Tц*z09oAM3*KJBCȵRI1,ty6o3&kH\!UG YQ0Dލ?VVWL8䋞P_C䋾H'J:N?]]JȠ6.x4*2RUzzA`̈sSگv$\$g0$RZz$ Nɩ$1tLsybpl"IrY /hTY6嬒'sН1ߙh묮y8ZQ.vу>p_y'xx|E"_nWhXPIv璉ݻ˖]t a܇E/T_Ct¸ ܘ'b7#[[nLSS[ڏZ>Y7K =Qf~t'jm%:hF C[U[A=]V?|cMk5Xз֚(5 +mim,?lm[w}7w?Wضx7ٛ!_!$ceՇM<ݻYkX_>ȖgZR$$SYjo#r]~mVOfJn 2z ?, )l1@impx0}x^L| 2_+)x7;;HJL?i1ϝm0;QuG\f62Tj!S功8RolgrG"Cٓ,R|s:+㝷:??⧜vpJ{pc*͹(~]5L(!gBV`6E$YJj6Le5EJ:*#87R5i҄X }&9"7FQpȓ"U[k>۸(bJ.i[´Vmٵ׎`qi[I'%nzo*KI*삘0ș3>m96 03*! =2О+#IL CLȱ֚Jek+RbJDR]f22G]O:gs4Ix*t9Ć&8mޱbإ6L9RHP % JA [j5F'A0;/>0rQ#zu?j/w.{Ї\(nj'$VJ' y؎bRS=N%zE/t[~?hSs~5|k?uo w\2Zɉ.m&ޏ';^2~c\I%ަq!>\r6RNTpm?x3)*}o}9P+[2E/PQ*:n:w[?}-LHps%%iC#Dggd gVs%Q֖"XuT.y=FLqE~"n`J6#p~_}JL d#ɾ^rB_޵Fr+׿"K 0#Y%dA:Χ$wu֣ CH̴״vXwSdU甊3 nW0CꜪt`:F)m1ċbE83lWܘ!(y ]q.J?y)R+3k(şXiy:S1~O?Ʃz{>f'1+CKQW]8 O_Xf ;JsƇP7]?8郓/է7uF35$ýi} T}\sYm-lYp=oӾ]4}Zmvw; q [ /vmptPyO?|ͤy%NJ4 y*|Ju7n(qMH\/ cŅ:~TdZVOwAkܻ>-AG?HCxy-C~(鯿)1* -xRjvXAhAyI fPt$Fc+%gD qӇ"_#J+ؚ>P`$"4k9: % ,3kPV.-],P~[Z< r};<"6Y JN4rD`X8$1䩄^uhTLm*S[ A!F|ui68F4B*Pg+,b*Ubs:8 )6=eݐd6^ -Q|.8?.?g\9r;y+Pb—1\8 (o8)ΦgL;5Vƫ=w~ L9|~1%c 3iTEH+A.6~MH»s\j1#V`f>D#Zʙ6u0udez`[ oʺxENDž'wa3x%s+sHJF\Ȓ ;љtڰto)&N, ^:A(4#XW ώt4*8OV !iEJHEAg&`WV^ĢH`Tގpt51*vRKb-;vwuYz fʼnnY6Ϝ{!HL==VJѲw-m/kR+" +J{Lrte\m k^hJ3C*⅔=R3nݞ"4`ӎ^h<ܾ]-)㒖gJ 㯀S Hlп:ND_˞΍%EφEgxr1~ld0iGFӥ@F(|o $qGeɖ$9nnrNIBfYcuFFJlRJsF5V\9UڽM"bC{Y*B`pA@b#9 _RPw7zX9sI.KUK}{4uɿpWJ- 9_i,.m.TYUBx H<i`2ͺ - grb5CJo^܇ެRz맾Qq'cDtoٌՓ;1"< DpI:t<[f+SnJ 翏(.zu]fc9)<}LLI!Ψ'QU 3cI ֡8@ğT'jQс UsU 0F>+l=?(/d kH)#ڠM `)N,z. 
7AzJUh\$Zx?8Qf2۵Xh ݬ{ {:Ac[@Ҋ$E˽I͆ l VKH?w7JK VkNCWak~"6U\\2JSOZ9_}?nWi훳]kMrk]şnAùǫ8w4>c\L7wQ@IfCr~.kG>{-6.ftz6M)~pl7>mJP9b/!iLNbt>$pH5,~6D ›h6..-wBCϋ-~"*8s^ 2S?R=4+*'L:G$d`͂/1&\ur**"9F[@Ǯp5&Qc>?ֿMGto|$"`/sf!4ټԸP{apo_r^ݭW5 }`_zswg79~l{^MP2Nh%=OԮn]cI"N*2 K<6I6)󍒔 }G } SLAZYNu*qUZ$P{2+@Y*I|nd&0$P{ J(N*/#$i#VׄLh'H㼯Xw1AH*Cl$}Z(5z-.Ah/'xA K=4b$Ece.㡛r,:SK' JOLinWWYh*:X7u;!JZx0FP-@`aoBp#8{^1dI\™w֬܅_ʫ[i.e+s}>qnY]D+(U C[H8 |o( -ze囋惘 Vl4V/@`FpXcQ {L-d"(eFAɴw67.rd9l7]PRT<]lѫwYqDŽO!sٔw$g4𙨟eXb6 #F4>_2Z䢲[g\TޒA$) ;V }|=9ƛe-5jV&9NS4ɏ:&~tͷ2[msQiyT4:N:Z}yy!TJ\(k:'"7 G+q˱]m$nu'_rӻ'$9F˧mr{bքbf1Ot>'PNcoyxD3S FN0*8KQPbJ~'^`ci&֤ *H5x]9aL8(U\-r((& ᬡJ0= Y_`ǤC{CG/ؐN H*h XćOgA;$^帠* lYpP+A+z,/ !XG!Tq=eM0 B4,_W*2djM)r8mL F9wh7k{Vk~=XպvvR=~{FƏS`|B=ɞ/'ٿ|MWz1y/l$-0 }FIt;8Meӱ[E_ft|MtMel c11.0?L6O5Ȓ1֟;LF-9FHe r:C3\,Awqx7ǥi?j2n1goWg1Wn2 FzDW0 +f6D[?J8KG'u!Ɉǿo%߰3y\.'% *X5YUrM(#2.ipq8N<+^Fb[bw~u>Xycĕ?$cv ^@s=Y{uhycu=y'[@'8%F 2`nw*=׫2k#آ?7޻|aF r2e5l{CY!,|){+t A;᰹zr/6c(CGg%L2Rp!9nhpWdfzgk ޹n~k1|2ZV 'pӥ>`V% 8{+r1gfY?iֻE@k24Vy 5^ P`srj 0> TO>ݭ"+it >UD AC(]moH+0d/;5,{,93CUKMۢT(آSUUO k !â:/LgӑnRaXWQ:NZ&hY%*(F[/-EbBU1DqIF?<"FQVAV]P3Z̟&֨Dz/]+NrK:oDlJMZJ11gn̻@T ݊_4׻ua!߸ǣOy7j;nN3xNy@"ʾ[z.,7mJ![ks>L~x*9BRA@klߙ|Ϣ! ZUU@@eN t8RF7Ӂf#U|~7ofC.!3xU73iJDoم19ϻ0Ϻ80f%FW1m4 d8G06)OQaF 6(9w*`#$H`9j,4CgRq$* qmH6NpP哀LR.,7"bI3& nN3x&S6w+~\օ|&M t/PcװW9 QaAq򚔷sqZU?Kav7пL|YəL~7 qPoҗ_4('S0o+͟T]T>O??;@ lw:$b)(C"sY5APAQ #V؈a>D_ =LqLJr"S8Hiۂˆ+kk7'' ;f"ʔLք~ XNָ"_?g%h+^+v7٠nC;x +tuSvx3-B}2Dnp i Ǥg~{9/f *&fV")VBBXJ:&71@dx*+Xu%j8%|wf*Ɯ[7Ww2fLB2_YO.q$໎b=,Chhy/joG>]p 6˾ud)"c 䔽<Şq13UI sKO7!RH.N  uWw >ZNt%BnSsJ7A u' s[64@[d=p0CR3΂>X,2D,$FE"xW#M z [WZ%RV*n~{>逧]]FZ_=+g%\D_W$ &7뻏>/oMX~e]^ "*BgO?M|b| B/35ş蹿so\$h@2a'xh.goz{rRT SmLti:TxNxe :G>zNh،FGGJIq+$qQ|7` ڻȰG JC!`B`4j=Md!J0DKG0IфIx:E- MF0IJSpsdA-Bhm --rıE.JT: i`B@%FbRşo/ I.<ϪuZxqsY/y"KP9# Q6|w6Y~MCj;zu'g!4*n㴪0 qk z|i&_&&'tbP*\P$\R#&CMZ2dEצ·~64*ZqƵV~OOmD9m/y:?~%!H>*fvײj\p1<CTTZ]1CCw[ ;LȂh(:E>v;TC*86NJbTRի~zk 2Sͺ>,Ḣ Vj@vL9N`{0˻^ ը~OeXp& SH* [YQq)1L4 lhIB؄Yx$ m#Nhm]_.8GpI0yz ޒ4_g"󁺐Nzi[G)(KW{)o~jHNfc9Bj2oX Nnp'*|{ Fkt͜2l}PHm278gcVI?rj&o+ ZZ"vp`$0ϴi˪,Npd[NpS6 ^.o pDs /"/?^([HWiT0E-S9rX?X"/0nߝ{ǁy_ ^ǝfj崗G{ , u2ze}ԣKFH?Cۓx1{krPw:pڛY/;^h h1l0e󟂹y}l<#$~zVy kd3V]fϮ?|vzbydu\-\Y-ͼQ[W GB[D1cz)/ٰ4BE.lM\K0r%n>je8?]If7Pm.grϴmdӭ1bŮ=^ *F9@~5̮$Ӆ o#ge hQ r 0IT o.1|f1Ytf1)0[{F}>MITCEJCOSf]=rx=ahN)'սE,GoJ`s̥Dה 9-i9muOJj!0Sڴ6^LPy97u+,CH4QƀdLNB:`Q:"Z˝QX #0-1?{WF 4zh`G݅h]MJ.fFYʺdK%*VE~#A{ME:u! \UAk8eomHX%ir4 .)u5KG\=)(J(;T*ї-LnBV`DTNәP*1RˤK(e i."XQE\cKe;քcQqXjR3H1פ$rllsm_=ؽla=DmhkgK_nQnd Ϩ :=n,[ MqgS#ZX,cáx  |USXOSX_zK %iK~_&I6=|ZXzDfQ*}٩]US!mtI+O'o?Y,foEPnˡXŞRRi^*%.zK3񏨂UXH P_#*hL.H"H>jAMj ´#*l j#J(;LKphY]HWHwC:*颂U UORqǒ/UOJe7R9ճ~q5 )iŶa\2\#q8I䧌9U$p"g<V(v;@ޭpuᯆ4xbe0gA/J`G3G8!ހ* #Tc\ c ǂey$hu;1ڍ4hVpY!Q')BUG̾{RYW}8/:kQد)dOo_S*ʪ.A-(*~K+% V%GS[[L viUIjFV`am:;4R^Єd\ Z NݶNv VDiqm.}"$a`^x&U5dD[)1LY0tjS'0rZ݉-λ|*]:[d}c\ e)ED. - x|53N/d嵚Zm9Pe][^k%FKikf/-}֪!EOSm"%~v  wʃm"b(Drb"TuJ9i8!X%=wt}zQӱS%ZjiH>4Rῗ%z)@Ik^St nbx*<"ACDBh TR Fjha3CT[wtW/wɳ`ir @_N4"OD 7>',W3 4(W>(pBx\j-rT<5AƄ1ΕbU:sÅIF:Wfεa!߹(< VA>wpH\\ֆ|&ܦeN@) Ң%fShhӖxGO2`p-S*T~IZ$2))]\ \7xml`hEHkT[Z!.ӊ ;[ {`f LkDVΧHpD+ŤeO5zlj x'50ݝ2Ԛ (HE+s§{|-{e憎N qp|oHZ'1j%p#iA={I^YS90;Ja:{ BeO|5ĒuF@SJxG@v Ȼ$-}oS 2Joij$zbz׈_*L8a;𢡄h,,0bLmb*/R@mב0B J^1@B `J8>{b Rn۫`R*v~u|}$ĭ }gr/!ַB|XʰJ 5 ELƄ4h *"s0(3 +) *rΌ'rhD:gR6=<`\h,:KQO(SJ q_?UF)'N ]Dl˕ /=yuryX ADLVk/o&q6_ܯpu5|y Bo2? r?s燛> xL6hT;~xr}'Ј$Mha<ܻ~ʾ#JGoJ%ᘪ(f #F(U KY)p%<! 
[E1|нX&ZcB]XZ0,L@k`Cof~N9Y%_7;(M/hrƴ*t\qzž\x]c\bڞY1Τ_fLL> x숁>)f )LH PDʬ{y[!7$K[D5 d:HnBH2HG#'!"ml%F&ТZPpŰiY=G(R s ^uDq^^bvYղ񵹻1殏r9,]~HXV~]Jh'1fs lȏ73w헅_¶ G 2 M ߌdYm}l65c^r̺8v%F[54*םsr6sRҦ*Ik=tpRY{<8 #VMnV Ôr&xN̫Spc˚3jWa6D ^Np8sT[H M,ptVS02lP6bZŭ2EB{d&FYG43<&r&A :4GGуtJ(/(\U +5_bdazMd<L=sW gG>*&aFA`9cLt1P-C1 "-}g > +Fb!Ԯ_rZbL1幔p˙yٜ;ʜoV6 )+\Jq S-sˈ="$H+ "OlNׂⶋpuߧ:f7w׋,̞ R S; w;3Ћ0& Hi5:QpYuZaRJS5dBU*tոyCʹWbx|9B(9&LIщdsH 9G35e'B XBTL Go1Z42u]b&am]nސװvkv mQAZݧ]Ci-Zܸ F-\uvz~^#NzM&❡,#^B^ž$L;G`Aaʹ^)eBr62L`R#LST'+)Tø-6ݤb˸:uƱD^Ѹt9!i\{x|S~&>WO=a˝h>eN]h%wF$4ur'S]E&V@[I\ػ$]yZb;TߑK{WV!qyr.S!Br ֨fDrz!V:ńOrʧWh:C2t(t^6]i[{cZI\w ^Oyt`?@g0_\zq;O %Dߤ_o.27)Ij> 'ЀO|Hvu,oe?+pf|ISϖTaMƵpwtv~ 9إ ΄xU(T!x%Uê–1UT_'|5,^Rk=NvN0~|H+E'UWfm㽹Ja;OJ)zz՞|Y mے(w§p !KpïK5 AEhTpv3/A4ArD~"%T4SBQFP=ZH=vfB4~]ʱ!E0οVA9;:- 7e.cQJ} $q{M2"E%-^T>[g^PiOi{Hݥ?.Vͷ6N]}R^s/rޭmB9u̽E9751{s_?Y3>,~M+ȏ67n#鿢F2@f&ڽ2f`LRPDQ/lMNL6n4@<|4a K?8hI*b@`I \x$6CSZ@U K%C/%j/}PӉX2ixR%4_k`{=R_/L 2@ Z*|qc>.A\E͈n BkZ bhkev~z^(J}8r!=+9C0u\=\y`M|^mA }SG((wsTE(%ibp \;xg%K+ kA8ܟ5Bc2Kk*尭 fhb`g1e1[O.J$R~)8,gdid"JOԄ+w|]x~.t WN<'J%~|l9.>M?x]A\q4uѷ]|tfSgBo/7>xM>xY}6_?|zN] gf?pQi飠̲wfX/4Nj`㧄3˻ÅF!.pNKoB|:cJ9|Mt; d܁V]C5$za[/h-XPP6̹Mak1Ժ)*lݩq6 K[ uv*#$ZcR%U r-r߃goݕb@{HzSbpмu .FS5C4p.vXʮ\'bw^F * yRi ׼UЖՀ+9K5W־s- ҵF:V,cuI*ّiWuw*8W ZcULD}q@t\N| S &rIbɼLEZ QFԙ3x|bǢ[{)'\%7 `mjD҇SJ2UU. u-%]ܤ‚e D1JČT""(bh"i$QTrwXU6y2⻼sޠBqsN5ŊL#"@ εV)s8n{Nk^ɛLj.5㺌KԜSpdR;4\ڌd3(jysfd.SĵUKm |JK! o'e}NJD?a1Ώ{U#B4kg~wJbHh+hɣԂ8T p!X0\(XXZILi*72j뇝*yBYRXY44!QlDK+%ÔR/#k"2K#HTLqqji O$CH15Vc2q" c$%@6QFlݷ B5c٠Bqލ6+0JSOQCpP䄡BQHh/8rCX .DX`W nɊq1r}Q09)U9KJb ɏ%& 1a%5w/.#(t!h@bp\5w?/4"EukK'utd}s7v;3ȁ}0|=08˻rzHdU=s=D{3ځdsûi ; $f*b\FDCFV]YM7 D>gEC(H @q \Y.rhƥmZG+[5; kSw>#a32%gz؎p[< Cq/nV dGpķNqp?d?V{飉e֑q:Zw}^?ǿYg8F&I0odD=;7oYZ, _a|>d$Z3yljV!N!% ۡQ `%xڸjHIDm[vU%J!@NaqV1~y:]AB!ݺ>G7g_=i^Lj_ǏEӜ.emgpٹ_"w8ݸkb>$weDԹt][ґ"H uE0I.0p*}WZoiAs Ќ+]f{iz<ƌP5<6dH(t(ča3ZZ]XQ]dֶlj}4sg3X*kiwҭcϽ /q8{(y2n~=B~-cZypaxBdkxUwytHП1>uwle>W6 ^0Ϥ:5h >0cFS5Y"+乚Z e HrK2ZGUF0BgB6P;4Ӛh L ʴH>tyBpB'+e,_w 2*8ŶV S%C29dY%yH5tf6if96s}EVc/]UGaϱzlӼNwa:}5`=vy/ڞwSG\p_!Tz4gݟ$a r z? h ]wO3;akEUxPFeaW=:^㋯'|;N=^6[vMviqDZDJD.ѷ%-61 jr{ε]]]ҬFA۸|./kGy ҶT/+זn 8GYrIxH[~-_ݹ" TYfkՆ}7 .:}I^-op{3uRdOM uMpޛs}61A*kcLXhU߇}"('SA 5NxӉc,Gז979B.)LWBDձ K[?\ h΍t _K{_ RPYN0UZS8h 0PEdGQ4v/̴O}ov9,J>Dy #&-M JjٔSqX%uZ%`Bm!My %T;+T F[bLDYKF6/"IjHQ09O FĂ:&Qp* Cí#hhFEUjaO.;Og) UPׁ# ȡs sH &#( B€:(( I-81?{Oב_!) vF}TwU~fY8_ѧ,)iIJzfH>Ahj~Uut]"j@N'PeVyPg(HB 2;<{bO+] JØ`N4R2Q n'=f~&OR.D- 8X<9 rNzIo( 6y/BGc"gJ* 2"l rV ;gPi˨\0||KTH/AX˃7$T])[$r{1>|}fvܾ_~wFonU?Ro\kq{w0zF xY7`Cj{|f7 ݞO&L6p`:zJŐԠAU lk%Q-\z\ KAOYC,dNtUJj˻_{;I,G1fQsW)UqbNW8">f0KB;.p~j||woo/f3a:]yo~GXCGv~d[b{.>l6O_@H㘶J˻˗gk\vM Xʼn&̗,;'ilDW'Ce }E<>ޱ8+6B?S\j힆iӓ6O2ǥYQogҟ "{> Wz 43Пu;qսhct+D @ă;:$tl`JDRJh7)W,$gu&%mU1ZP3Bc ^Z_i'Õ\+,eZxWoLHx F Ec؋d3F+hhNW}P*D>{Ma6D-*SRz6Į1TH9PB о66цC#B?uXJԌDe+-g (jkY!hVeA<ڮ/F@keb \bK4,7qvH*<3 8ZQhPT1SvUѱ`F%A +KƸiL/2&n2'K ?w|X~n*Ih :rZ[5%}R:+nzm;A)nCm!!Nm"LjrS !Dw>HظmjѳÛz?>zHJ+j>dM A JY+HT4`5$@@Oփ1 ~eFQȤƒ<[fs{2#7-֞mUieiR:P:'%Ir,V\b4FϘ$!}P^>-"Q{v7ߋ۳;$n[5FFrQɰJ^'ɫPӇ&:iPf/2Զj;NXg^A¸q8Bmb0C(o3Nj}N_)Ll&Gw׾%5-VRnj! 
O uv\18Ox{-K8znje;O6_gHu[^C>u`/hg{RUЉ]Vp j˥px^QD_|;{јL'9,mcƛ|4'=oE9 \wRì#qoEk馻nCq:CGM oncpoE[ҍC A }FvH/BIem Oa#On2lň͙ʾrOo>8^->xQt=>~y{u.^)4.!s5YG~wKPc-:#t>r[)Yf&N̩s?7\[d-tLȯ6֐$ZzIqB ʐM;dOD2?QF؂k|`хg[d=M@C-jLWu_,jDHʓڬxX2k;g"eo,Dv·2<7!_$`HL(D?xY7S܁$չe޳B4B/N퐵ZƦ@}.0_M%QjG72h9κSulm>c֓nlonk>Jvha{r L bXMRXb9&+Cf2!I ZFt\]Tߌu=K k{쎐l>쪾Nj/f8 3Hm>S9H/jބvT9*3XStU`z-Zˇ5r">(E2ԌOu=|v.%tƹ[.޴hL >$h[u36;B|B"XQh0x&& S Ǣ2e՝SpN aUU2)JY@$d̞`@d*ڠf>ЍY$ZS0ސ.K:JlSPbAldI,Aݹ'Z7mN#@2iϊOPOA,\!3E m@=52)V&d DDvX6Gڱ啀[@5FtpQ&e4Fa{TĮ|"EaW-y6 W%flZ1Ic±ͮvEy֥n+sQhsQ^x&{~o'ߑuTbO[1l`IX ЊkCp~{RFoF4'6i!( xeI$J6$(TD6-El 5Ve?ٓ0J][l?y2[TIQ&+F\AXcLWi_'Tj 6YH*J阳RBE8beĮ ")€F*m RN%`5Y[gL0yXID D:9GrC> y֒vyVE ,$Kzf X[CX柜" 0j~fYm0dv}<scV$+#$.!J) > tI%!nW"zLHxwA[E'0D<SYPQIJX^LE* 54MsRfj^NR." R"Ѧ=j#2OBR.Kt04FXQocJdY >Vvn `Wr`-"}'5ا͔ )[Z:eh U5"WSlE^7лn)S7QJPm7f|>^L`V*=`w,@Og 1.i ̧pϓ`勜~3 w.j7oTPOi3i;X@/%$̓[)iRdO 2aNT6}}jsX)i21YPRD[ @ %gQ(ZR6/P5=KGM({_`$:u PVRG-MB++вm:I kmYYׅJVSV~i.L.G)h|,.(EoW#jO=CJ̖yK2YyX*T7t|CG7MnC+'B|5ļӖ|W aL=v%M49t5iWݜ 4YTjy!4,2!R3pG)W0tbK{@ 7\|v*K"P)zm[IռV8U~>n6$) ֲ#i|9HICIxAqlU D4l4 ,foIfbvo&'ww5y.GIsR8z-Kp[,O5:ӯf$B 神cHeHnhV}whQ](,b0c!<(l*}·x|i^I%X$~ߘU童Q9IhlC{*ǵYn'?}ݿ+F|$S{<-=ؒSVSփX.9F( s ll8[rقrCύ2~ (A~dLCoqcD2F 69K]f7>ETX-SF aHn4:}nEvtWmHGtC ƧќjX}['9(ogzlfb7{j.9^ 'de2޼{zs1^ͦݶ;i䘿}}aj%$ XhiRcRi$/,Jp"#c"2MI@i@Wp-&X ePI(e,+@ Ic@2LJ`R:-0f PPyj ADyup:o Nh- n!z}V jgBoMfGT.03 q͚Ƶ\n%uF螔:! 9w!|s$`1-S48@!9\kqkz5j45b d*Sy DUzɦl+$O@n-!B06jVSTшFD 7ݥTv$j= D)ej04G "S%4S(`w'X&RR[kB,Dp[ՉvUOxA釬l+kY-7{ƳF-'"oQ*00ʤ룈V!ʐA @h;Rr?y:'K^,{5,Wl )ռT[%R" &E v͗( $.@)KӴؽlr?:{|1wrv`:HWUYK5^1E0 !2;?80e۵{H-pڦbHjIH%<1h4G˽?Ђ<{k Ƨלt:Db j7_ L>h Snm!<^H7u;lGb|xaGm جfܕ]uKA}xA@e`@̷hm][5CcFR8*&nq]odA_^e zt ϟʼn`PF-|v2e3`j?;'W8[MS; uN7̇y<{+o%=얻AϛU6KȟnGZ3'/Nʼ7/܏/AJ!PүThm6i^H_5N)ڥءC򒪫J(AyJ/:0ĚM* ⁈Kh>d>`^y@C6,xh.HJH{TQ6v0Ya8ҙQ[?,f3X[ xw sK|,瞊93;U EWYFs9B7+ĿOGd *aDTns2YgA>2ym~fwk QhLrqj?RQ\췗hABKJ~7iUY-f+?lrܷ@^<acQͭ#bKb8. jYe]Jl`wxjTIc iDC] @ ! e;2M<nBBv?>)%m ֩ ށBY6}ԯ۞5RE*#}R)U]d GQ#Η{ ln;o5Nǔk(%0UYbԡe 61 3jp.B~.k5%Ofn^2[F)B@hWYIC!{=:Z` & z3p=:@ں,* |l~[^w>gОѾ"e(QzN1/UzFEZM8BJֹY2OF0= h9nC tP^:S g9{1 f]_1F տzrVM=vwxKIjFQcHJM ũN)*IkJ3cYԸ. X7u&; wkde\md~\n/}}yy#,CUkvU~>˲K:RcH ֓_lGgT%B>fiZb7IUJD(;?#IHA wVݝ!Ig6D@=U*FTraTE:kj##Nh=+P#2#`j.`<̅콟 N?t>]3:5OG*)Q<"PamUPG+J8F!cB%NJXE-ՔQ$ RŜ`B卻ܮ.<^"]7Z(37?=y o~4~}>u% oL'5WfǵNXΓ$(&"d5GD(QRie2LG߭6^Ӥ҅'tnLZrү?9SghoW)ᑓ!ef֔,n}De=oG!:g.v}T 0^\!u5PSP0QP9m lPE(1D.+\^% Pպo΃(XՔCR5v{GF ޑ^< hb" :_Z(mjk7gʾ6k a~ڪ@&\mNLNCW۴ q 8 \mK% sڠ{{W&_Et `Ry~IMbh*|05nrݥ wJ^jpi(YO3W{+DѮc"NYҍ&iyF8,] ߓ1b y=>F 9C V̆\QpU^$'EB4;#G6aT'ΥuB/%B鎊!]Lj`|WϭAV YZSVuG]y f4 R)e*~?ǩ=ۖ^2JvYsrD$ DvFSw9C6(4m6V%wTs\nL$d,4uN7ʜq${~yv.4Mq"<6Jor1.(fz 5|%̥IW"a &O;`1:sdh/”#9/ʮgm=ɠ1^TxOBNT A炢QAZHp0)G^A=B1THHHRHp LG+Tڌ6·Qwg JΊm86co3j8c- ¤VJWh3S\u> c h XݢYvc4ov9h4.'5+Śf&[j.am8qgZB)JiDṬDJT:inxzs1^ͦݶ;W}w@a`/ Tϼ+͞ :./ʆnEJ6ڪUU.* bõIc}X&51 WL,1 JBeМVmUG^ AI1K41@Z"I q. MT$j d g@PRv]EOj4ca \Gbr\ uϭyʧLjx |7O$[ Y5JzcRKҦ/ KK|߸7.-M1-q."i OU,&8`DtJAtSP$ @R8!LS! n<2쟀L\ zvdb~=!ة_9kd9)t0ޝw/$%R($Isw+Ä?{63+~$X`nv&~)٦[MR҄buuTz `&`i(`g,>I)ҾOa+ڡRɕu !B9-jh7Ub˒"}m郘@;꿲" Ch0bs+ iе擛a2mC#y/E.5.T"52# kSLb4Zk*:fYFH "D'BKtj_uJR[[0۶N&$0HزYbJZ̘A<#:(WpnLLV%Za C[|h~O|1ƹGؓÙĚĆa^j^ck΍G-Qyd}jx&-s΍NJFYIZX!B5m1DGv!!Ҁ'w ~5؍~gEm^ȮSls0&#ٛ(Le?&4}/vx,\C'eF$ũ7Ϩ}fֻYi3B~$s~ĥ-#>=xx'#rޒIv!Z?gC-!l u !6q Dێ˰;0ph^o[ AGt9BۡJ.-NO#bV)ߗ.ݰ(*H^UPbV8y,9 0W+F{}pRxYs[y߹]alnLl on. 
؃" ~,i6e+% c(oR]u5h ktV94@vKŎw|F t)z"΢NNpмɁc(<cy!!w WC& B$ii=dhjg%=^Fm"D JZ}3gI'3Ֆ+[ )kspcW&w%栈1Dφ%Wq {sj+Fuz9 PDWW_U—ՙ߭!84&KeHƐ* Taxkb(vSR`(tV W`~m !Ǫ\4tCLZQiBh*'w7wU Ս|ZOebcꙨчyQPD綔)HD;$UƄM4c2pAGUa~5}FZV:FPe4@LdR97cĕԒ1n^)l 8a)ԇh Rs͐+U&v/Ik0OBtksya*Ceӏ)N F 'ǩaR;Tw؇])2քn$(f6Y_|6-g _>GXcgT Ψ#=v|&LMۦg$v$`ѥk#Lko̒LJ7+$FwUkx?򔣞?8+ .JC nCpփك|VMkzPqP3N >P3Nɚޗw!.(Z܏?Ҏ8Eb2ϞPG:p$6SX^dwu1jR 0h'fTJ~&Δz[ ١TlfA'Br`h@ r+7io ä@ aN@it>KG[~vOTlr+V*7cY3!8BüDz P{ERyf1>X\Sǥ*,o6®)xooF,Qt3c_ǓHYI6ȺnEjd„ۇk;ݮ+=[&RcqAK^Ƈ[(> nq2 #z'.mCQ~ٲ9..|m[J:*ݗXLoIBq͒R$TurUaKQ#j\ RD'w:v6޴[DS[hs"HYʭ;?ncNyKr)ޔvw18Mr56_5ZM$acSjny$7NVُu.f'.СL ?$[8EPq:8k=]9d?=)F`_B'omT>ܻrYkY]!;念fU:v1+z>߳_F˧9J[]s4KdeyXhyPHOyw2muwf=̤OK{g=_>aH/Wbnj˘1q3&.ce9c\ c%pYbPXגK! S:S[Eɵrʪ;xu1GH5jI;iH3#Q :!䘷eݨ{R!(X ]k`.9_ګ m|\9ŧ<;Ψ+U"/GۻڥL籫݋5&3#l qKq7p[R8&,,"~̯]q~L8V:8 QS:GJ]@6ЪXzآR¦pthg9÷ JΦ띗aA&'׊Kfbeǽ )b)B bQ5PB-ǀ[-Є<*qr1sLv> 4_N\f@f \z(IlH`VPڹFk7P17:^f,jq \6S8LYA>Xƛ߉]qL!0鎶ϋ%ZH4<R[" DlL0Z"wCc*l Ǜhkĸ2k,25³F:A=xsqZ7Ts)֖oA狂v 2YL=Y¿ռBfޏoK>X9"py5WQ5,ϷGWTޏ X)Gu5[],ݭW&h/Uv2t;O 0"İj")<߽o[b5ϥS͐RfUT9յSq*(!W7ռnu''S8}`K" @ωT!jF~2%P*8W0d :J6}`yG5V5RtU-ݐnU購w.n'>58*X݇F**H n!+쨦^zC=p!4^\)0sjH-B Jl79TRSפOj9IRA.aVR{!:Ud8-@*: [2"% zƙs8% "C&"yS r5Is1H);F}3%g}Obվ6$n1[.);FvD+wݲ'ڐo\Dsd!xvSnNu1"$ՓϞhvkCBqm,S\ hiQ})Oؑ-mL(J߫Zp9ck]r׍`M-2m&I"=&&#:w IV6" LT[r'-BHncw~^VyI7fo/h+{;C"p=Ua7Uk1ObOy4Ǭv4a ^ag͎(f=6cW=6\'_guS/1foros G%*NJmV"}hɭT vk7ltXX6 4MIRr9@g,`SsBD[8(D GYk>5:'t䢫rٯ$#JQZSʐ&h ^QpM%1 q9&;Zn[MgFnM3`&6"l=Ͳi}۵Z5j<ʅ$y Zo0ci .qN[ D5NQEO/Y'ހ@Ygge} [V"8+ɊKvqnouB`-"^7jB<܁)A A%ƺe7ѶrC-YлP_=%~9m},,8R/+6`l_W|{};v`^dwjnU@~]գ-6R|gG^8'| #P_Eѫb~z?~g^o^E7tU6*drTqD모rBUtWIwbObZ9ZV=Rlqzt8{{ Om>7%))D)8y<5z?(-OHf7w寭 ys٣gs܀n&kmc9Z<˙@A)S"Iа\$?su Z{kz<b#K`nmCWW>g^- 3㗟暻6J2rOeFݛW\|<ߛ? L]]`u}/0rgH%>YE#v{#{(_2*{<-쿧x76&oyX8}jʭk:2jhTCpC1ߗA5$m\|TIyD-I/93N kP=qvsoOFt &V%9*F'VN"prj]#isTNyk $HYٻp!Pi7Ed H1)*;'PIeW)6[L1" k~M- 1 ՈV-Lkv́F̦FnW+Q+4:W),v:R@iP,$&TF' WZPx/r$/*FԸp:u6t 귅2l&$Ols1Y,ZEMiF!72+G9(҆$I)J&'0 $9CklTy 9+ 9oɅk*mIU#HIvǷfÁSפl-R ؏ xK)7ϙn=jV PIt(jˣx7 Zlkǥm[ҲDm,.WUsK׸JݢCYcŖCE@| iӇ=j)dQk_GiR72hbڐ]93VJo+ 썐3@Gs97ȏj5j63=+wᆡNOr-t@Ӈ[z1ɝV5Jҩ5:ݪݿutK͟<z@_V}~u)Kmܮ9.w9v9'&߀w1j)>CiǙO@q'06hXvF:G|s5NJEO&l]9g@2cN0 F}}mn{vW?*M((0F=T HKJ=|( >nlł­RQf!+<YF[V8hVJ)^SB D3z8x7U;HV vwwVR0nݞܴ5>ܷأ̺ jۻ(Bf}dV@B+Hϸ}R^Kgÿ:4o'c䪤9ctU38OP9?sH(o O\@íGSQhTkt"Xc4fsBӬRrvrQCلC Uk&/8+gLo,wZ{~k%}v5*tKaJ~gBHh^)0n9D=dQ&z&9M`7BH}.AQSS5;FY96Tn!B/ lR\gRa)Os:l ^O~M`T9NVML֝w=yog|gj%Pωդ٦W}߃~D'lBeU G]¡]"!P^bsf2Q̣C%G]0%*k܉I{ǷCAZo͜FAwt58ytgmx{volgDBKkY)!hY-Ae*2Dh4Z BYx $/%2R9 IZ,ڲ~CB *8V@`Yij|yv re0Y\3Ds.EF)y72 XvR6Zf `-eT9gW/GDijn+zP xKuZ`iEʹ̺YC7GPi-byAM!u㍐.%qv|\~{c~Xi)t-}Fv:/v L>6c0#p'ix9"!L cTN1֦Tl`: U7P'[?&aT^s뇯 M*px%oJgxSmF_Si|_?^^ /?=e˖^~zN9S@xFR6SɝVv^ٕ$uc{-#L1vh[8kT;'=t.h${-#L1y-x^7tPG!2v=M/ݜG8#+rb׬~\t6~u~ hka49#7G7X̍34.͡roe]AiNǑ\l ޽[FضLڈ+AE=\^32wls.MR]mA"zS6Nمj )ϱqi\f̄Yr);jJ@R P9z0um4gXxv11;e9"mZFbl䭮&ўۅj5>)1\ 3|GaZViF K"R>;\} h Y=khڅPAucjӉpzHƭsF: J8a ǨnG2FG1pZNCfO&Oz.yItꘋF% #uTα䝉%#Sp#PÓiP{`o( [QIh%p| :V@%0#QO@tRY6 Eb/)]I\dj+D-CpdupIc iK!))+)|JZuB $V.8_=lxdI8B%H>1,&(֑>dU`p^7!HbPz9|4cuQ#V䁽Y+EW(WT$8_ Ҳx]']|a Y0*KEpj,dm4F]`X6*JQ@=Jby[^Ώ@m]8y^ *G6ұf+J%U#W2j0%/<u-<\l75Ԥ%{޹k\n!@SbҖχ֬4+›wW܌Pt(/j Dt 9RMϸVT+'9[ /CYJc̸"rbÌbFN$AL~s`|bgT/;NQI֤h!#lKLB:%KJ]Iwcvɷ0䵚IG;|}]焉](SQISLº9bBNEN@!yՁ"2ɷeRM^J:=Ywo#W&i<#VsO7«/PSTl:)Б&y=g^{J #l[:ZiY7RNH>hk!9W3h ]-qj ` [N&;C)vbOBygZۺbbQ9CrH4{.Zl@g֯؎Cɱ%ْ(sd9Wiq3r83evE ش/XG%0s5L\kEY*Ã^XOec=KUcqRՖ:M1KQnj qYmj8JKuۊ>0J}  ( )tJ(|'!1nT[FX.F_Y׻_a]2՚8w,n,|~a;;-9rwM񶻺,r$Ht/>1gd@Z!ci˛o<2}f#Q 󂫦Bb=" Od7&,ۀN0Hާ3Br r-Ars8R-oYƑz9`żS1w6>Wm[ל/~u{]>5ip;rMEmǂ>ɡtDf,lQPM̞p8I=xǍi 8qd,ߣJ]5lzKNʢNb@#01oNi*HV8Կ~:!>)ȟSNjeV.%BbJ)Iy^*p9:,<)# :$qz2DX@"o@7u-A%>6Ԡ 
Jan 30 08:30:14 crc kubenswrapper[4758]: Trace[1190950679]: [14.281938403s] [14.281938403s] END
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.608194 4758 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.610164 4758 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.610243 4758 trace.go:236] Trace[79865880]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 08:30:02.971) (total time: 11638ms):
Jan 30 08:30:14 crc kubenswrapper[4758]: Trace[79865880]: ---"Objects listed" error: 11638ms (08:30:14.610)
Jan 30 08:30:14 crc kubenswrapper[4758]: Trace[79865880]: [11.638615775s] [11.638615775s] END
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.610276 4758 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.611890 4758 trace.go:236] Trace[2019750266]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 08:29:59.615) (total time: 14996ms):
Jan 30 08:30:14 crc kubenswrapper[4758]: Trace[2019750266]: ---"Objects listed" error: 14996ms (08:30:14.611)
Jan 30 08:30:14 crc kubenswrapper[4758]: Trace[2019750266]: [14.996735875s] [14.996735875s] END
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.611937 4758 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.613463 4758 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.626415 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53038->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.626506 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53038->192.168.126.11:17697: read: connection reset by peer"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.626944 4758 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.626989 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.651393 4758 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.678733 4758 apiserver.go:52] "Watching apiserver"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.682800 4758 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.683257 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"]
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.683919 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.684210 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.684204 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.684250 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.684514 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.684971 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.685117 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.685250 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.685884 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.687777 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.688000 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.688476 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.689969 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.690381 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.690736 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.690796 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.691692 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.692235 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.697518 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 07:55:26.624337557 +0000 UTC
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.710733 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.710790 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.710828 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.711762 4758 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.718908 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.721992 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.736457 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.743533 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.751648 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.764104 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.778271 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.784451 4758 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.797081 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.811226 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.811316 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.811767 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.811884 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.811964 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812079 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812167 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812277 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812355 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812439 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812543 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812637 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812279 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812764 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812291 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812447 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812722 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812832 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812852 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812869 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812888 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812923 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812940 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812956 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812971 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813005 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813023 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812487 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812589 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812653 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.812690 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813051 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813070 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813127 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813142 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813162 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813161 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813179 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813207 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813249 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813280 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813310 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813303 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813349 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813377 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813400 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813424 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813447 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813448 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813477 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813522 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813571 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813582 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813595 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813621 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813647 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813669 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813695 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813720 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813745 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813791 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813814 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813837 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813862 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813886 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813911 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813933 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813994 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814023 4758 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814067 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814091 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814114 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814160 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814183 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814205 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814229 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814249 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814271 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 
08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814293 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814313 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814334 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814358 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814380 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814399 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814423 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814443 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814465 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814486 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 08:30:14 crc 
kubenswrapper[4758]: I0130 08:30:14.814507 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814531 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814553 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814577 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814600 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814623 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814644 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814666 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814688 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814710 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814733 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814757 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814778 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814798 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814820 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814842 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814862 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814884 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814907 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814930 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814954 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814979 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815001 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815026 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815067 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815090 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815113 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815136 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815158 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815180 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815202 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815225 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815246 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815269 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815291 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815313 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815335 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815360 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815382 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815405 4758 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815429 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815453 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815477 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815507 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815532 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815554 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815577 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815598 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815622 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815648 
4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815672 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815694 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815743 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815766 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815794 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815816 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815838 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815862 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815883 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 08:30:14 crc kubenswrapper[4758]: 
I0130 08:30:14.815906 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815930 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815952 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815974 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815997 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816020 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816060 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816083 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816106 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816130 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod 
\"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816153 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816238 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816262 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816287 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816313 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816336 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816360 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816384 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813668 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816462 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813706 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813829 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813843 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.813980 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814002 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814152 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814192 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814246 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814293 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814337 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814546 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814617 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814683 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814688 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814742 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814885 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.814935 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815120 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.815875 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816200 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816276 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816627 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816402 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816831 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816996 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.818018 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.818437 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.818769 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.818785 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.818892 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.819496 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.819651 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.819992 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.820227 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.820613 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.821012 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.821269 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.821482 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.821212 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.821659 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.821906 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.821935 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.821938 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.821970 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.822072 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.822443 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.822471 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.822851 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.823322 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.825217 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.825230 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.825358 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.825563 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.825664 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.825816 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.825994 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.826000 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.826297 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.827076 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.827419 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.827523 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.827532 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.827589 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.828113 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.827614 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.827687 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.827990 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.828353 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.828415 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.828549 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.828627 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.828694 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.828765 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.828994 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.829278 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.829386 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.829701 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.829828 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.829992 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.830164 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.830421 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.830630 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.830555 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.830824 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.830916 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.831088 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.831107 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.831288 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.831369 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.831499 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). 
InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.831675 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.831894 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.832023 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.832476 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.832666 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.832867 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.833075 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.833159 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 08:30:15.333140272 +0000 UTC m=+20.305451823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.833384 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.833398 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.833597 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.833683 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.833767 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
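(Editor's note on the single E-level entry above: the CSI unmounter fails because the driver kubevirt.io.hostpath-provisioner has not yet appeared in the kubelet's registered-driver list, so TearDownAt cannot get a CSI client and the operation is requeued with a backoff starting at the logged durationBeforeRetry of 500ms. A minimal Go sketch of that lookup-then-backoff pattern follows; the names and structure are illustrative assumptions, not the actual kubelet source, and the exponential growth of the delay is an assumption based on typical kubelet retry behavior.)

// Hypothetical sketch of "look up CSI driver, fail, retry after backoff".
package main

import (
	"fmt"
	"time"
)

type csiRegistry map[string]bool // driver name -> registered?

func (r csiRegistry) tearDownAt(driver string) error {
	if !r[driver] {
		// mirrors the error string seen in the log above
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
	}
	return nil
}

func main() {
	reg := csiRegistry{}
	backoff := 500 * time.Millisecond // matches durationBeforeRetry in the log

	for attempt := 1; attempt <= 3; attempt++ {
		if err := reg.tearDownAt("kubevirt.io.hostpath-provisioner"); err != nil {
			fmt.Printf("attempt %d: %v; no retries permitted for %v\n", attempt, err, backoff)
			time.Sleep(backoff)
			backoff *= 2 // assumed exponential growth between retries
			if attempt == 2 {
				reg["kubevirt.io.hostpath-provisioner"] = true // driver registers late
			}
			continue
		}
		fmt.Println("UnmountVolume.TearDown succeeded")
		return
	}
}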
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.816409 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.833937 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.833997 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834024 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834060 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834080 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834117 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834151 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834235 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834244 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834371 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834402 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834424 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834464 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834516 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834536 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834553 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834573 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834616 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834717 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834743 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834763 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834782 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834800 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834847 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834883 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834900 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834925 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 08:30:14 crc 
kubenswrapper[4758]: I0130 08:30:14.834958 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.834963 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835015 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835049 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835069 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835113 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835213 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835254 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835274 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835274 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" 
(OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835297 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835317 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.835337 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.836551 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.836628 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.836927 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.836971 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.837204 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.837221 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.837226 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.837301 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.837466 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.837588 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.837762 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.838438 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.838498 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.839807 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.839867 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.839934 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.839976 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.840296 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.840475 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.840499 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.842312 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.842352 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.842383 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.843430 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.842567 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.842720 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.842197 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.843084 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.843337 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.843588 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.844616 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.844696 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.844718 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.845346 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.845555 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.845552 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.845919 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.846060 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.846240 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.846414 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.846616 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.846721 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.846776 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.846819 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.846941 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847135 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847396 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847696 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847779 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847816 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847852 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847875 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847901 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847928 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847967 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.847996 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848020 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848120 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848153 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848181 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848203 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848256 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848280 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848298 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848376 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848801 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848835 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848859 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.848895 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849394 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849422 4758 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849436 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849448 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849463 4758 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849475 4758 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849487 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849507 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849518 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849529 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849540 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849552 4758 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849563 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849573 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849583 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849624 4758 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849635 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849646 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849660 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849671 4758 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849683 4758 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849693 4758 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849707 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849717 4758 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849728 4758 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849738 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849750 4758 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849760 4758 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849769 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849778 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849790 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849801 4758 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849811 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849824 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849834 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849844 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849854 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849867 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849876 4758 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849886 4758 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849896 4758 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849908 4758 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849917 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849926 4758 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849937 4758 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849947 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849955 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849979 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849991 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850002 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850011 4758 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850109 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850126 4758 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850136 4758 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850146 4758 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850187 4758 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850200 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850223 4758 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850233 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850246 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850255 4758 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850264 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850274 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850287 4758 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850298 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850308 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850332 4758 reconciler_common.go:293] "Volume detached for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850344 4758 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850370 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850378 4758 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850387 4758 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850399 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850408 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850417 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850428 4758 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850437 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850446 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850455 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850467 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850480 4758 reconciler_common.go:293] "Volume detached for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850488 4758 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850497 4758 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850510 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850518 4758 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850528 4758 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850552 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850562 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850596 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850605 4758 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850616 4758 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850626 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850635 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850646 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850659 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850669 4758 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850689 4758 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850711 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850723 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850734 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850744 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850757 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850769 4758 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850778 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850786 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850814 4758 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850823 4758 reconciler_common.go:293] "Volume detached for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850833 4758 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850850 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850863 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850872 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850880 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850893 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850903 4758 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850911 4758 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850919 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850930 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850939 4758 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850947 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850967 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850979 4758 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850998 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851007 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851015 4758 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851378 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851408 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851421 4758 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849666 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851632 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851779 4758 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851804 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851820 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.849835 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.850056 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851158 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851843 4758 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.851978 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852001 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852024 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852074 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852089 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852102 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852119 4758 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852133 4758 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852150 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852167 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852180 4758 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852194 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" 
DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852206 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852219 4758 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852230 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852243 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852289 4758 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852313 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852325 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852339 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852356 4758 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.852364 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852376 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852389 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852400 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852436 
4758 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.852468 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:15.352424239 +0000 UTC m=+20.324735790 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852501 4758 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852518 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852540 4758 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852556 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852570 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852587 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852601 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852614 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852630 4758 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852647 4758 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852660 4758 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852682 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852696 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852720 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852729 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.852826 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.853600 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.854616 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.854682 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:15.354663981 +0000 UTC m=+20.326975532 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.856397 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.856634 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.861910 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.863545 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.864015 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.864879 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.864909 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.865405 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.865428 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.866269 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.866474 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.866510 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.866522 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.866538 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.866556 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.866588 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:15.366571868 +0000 UTC m=+20.338883419 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.866948 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.867414 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.868290 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.868532 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.867628 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.870030 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.870331 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.870499 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.873940 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.873968 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.873985 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:14 crc kubenswrapper[4758]: E0130 08:30:14.874063 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:15.374022351 +0000 UTC m=+20.346334082 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.874817 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.880081 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.881506 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f" exitCode=255 Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.881546 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f"} Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.881893 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.896787 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.896940 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.900653 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.901434 4758 scope.go:117] "RemoveContainer" containerID="0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.901973 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.908506 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.917985 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.927921 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.940663 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.947972 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953562 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953644 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953647 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953658 4758 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953726 4758 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953738 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953751 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953760 4758 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953770 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953780 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953790 4758 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953800 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953811 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953821 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953832 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953841 4758 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953850 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953860 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953868 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953878 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953887 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953896 4758 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953905 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953915 4758 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:14 crc kubenswrapper[4758]: I0130 08:30:14.953925 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.010247 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.031980 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.041779 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.092619 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-68b4e7453a969bba0a383cfd9165dca79b45560c81537ffa3eedf0cda0e956c7 WatchSource:0}: Error finding container 68b4e7453a969bba0a383cfd9165dca79b45560c81537ffa3eedf0cda0e956c7: Status 404 returned error can't find the container with id 68b4e7453a969bba0a383cfd9165dca79b45560c81537ffa3eedf0cda0e956c7
Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.252709 4758 csr.go:261] certificate signing request csr-7mjk5 is approved, waiting to be issued
Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.334765 4758 csr.go:257] certificate signing request csr-7mjk5 is issued
Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.357627 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.357806 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.357845 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.358029 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.358143 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:16.358116387 +0000 UTC m=+21.330427938 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.358703 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:30:16.358689857 +0000 UTC m=+21.331001408 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.358771 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.358808 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:16.358800071 +0000 UTC m=+21.331111622 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.459238 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.459294 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.459417 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.459438 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.459450 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.459502 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:16.459486002 +0000 UTC m=+21.431797553 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.459500 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.459565 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.459579 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:15 crc kubenswrapper[4758]: E0130 08:30:15.459667 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:16.459639838 +0000 UTC m=+21.431951389 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.523559 4758 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.523796 4758 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.523850 4758 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.523896 4758 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.523914 4758 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second 
and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.523937 4758 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.524005 4758 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.524060 4758 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.524017 4758 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.523946 4758 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.524089 4758 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.523935 4758 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: W0130 08:30:15.524184 4758 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.697773 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 05:43:01.535917286 +0000 UTC Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.772112 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.772600 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.773393 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.774032 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.774672 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.775142 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.775819 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.776359 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.777086 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.777609 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.778201 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.778952 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.779584 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.779838 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.780224 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.780784 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.781304 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.781811 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.785195 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.785803 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.786406 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.787195 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 08:30:15 
crc kubenswrapper[4758]: I0130 08:30:15.787745 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.788536 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.789164 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.789553 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.790543 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.791519 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.791988 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.792549 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.793397 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.793887 4758 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.793996 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.795811 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.796202 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.796766 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.797236 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.798837 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.799812 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.800301 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.801304 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.801905 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.802739 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.803403 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.804531 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.805663 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.806268 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.806829 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30
T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.807287 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.807871 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.808759 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.809245 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.809852 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.810408 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.810949 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.811511 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.811966 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.818494 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.830341 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.844380 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.880404 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.885623 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443"} Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.885667 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"bf62cd754a9c3e6b128b325ee021c38fc373360c5f1227f9f8977b095ece7486"} Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.887685 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.888835 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4"} Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.889308 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.889976 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"68b4e7453a969bba0a383cfd9165dca79b45560c81537ffa3eedf0cda0e956c7"} Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.891483 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5"} Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.891513 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62"} Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.891526 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f0f5c8299bcfac4e97210a3a90338e5a831f4514e1bed329ee68866870bd2a82"} Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.909682 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.948821 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:15 crc kubenswrapper[4758]: I0130 08:30:15.976066 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.023785 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.053359 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.084472 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.105180 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30
T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.108574 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-8z796"] Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.108891 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-8z796" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.111718 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.111869 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.111927 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.139329 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.153715 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.185120 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.201624 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.219233 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.234349 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.255084 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.266760 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.266860 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/452effb5-a499-4c47-a71d-12198ffa37c8-hosts-file\") pod \"node-resolver-8z796\" (UID: \"452effb5-a499-4c47-a71d-12198ffa37c8\") " pod="openshift-dns/node-resolver-8z796" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.266973 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w4ck\" (UniqueName: \"kubernetes.io/projected/452effb5-a499-4c47-a71d-12198ffa37c8-kube-api-access-7w4ck\") pod \"node-resolver-8z796\" (UID: \"452effb5-a499-4c47-a71d-12198ffa37c8\") " pod="openshift-dns/node-resolver-8z796" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.289273 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.308628 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.326281 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.336409 4758 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 08:25:15 +0000 UTC, rotation deadline is 2026-11-10 21:34:23.897824121 +0000 UTC Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.336460 4758 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6829h4m7.561366689s for next certificate rotation Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.342813 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.355541 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.367677 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.367755 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.367784 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.367806 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w4ck\" (UniqueName: \"kubernetes.io/projected/452effb5-a499-4c47-a71d-12198ffa37c8-kube-api-access-7w4ck\") pod \"node-resolver-8z796\" (UID: \"452effb5-a499-4c47-a71d-12198ffa37c8\") " pod="openshift-dns/node-resolver-8z796" Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.367844 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:30:18.367813776 +0000 UTC m=+23.340125327 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.367903 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/452effb5-a499-4c47-a71d-12198ffa37c8-hosts-file\") pod \"node-resolver-8z796\" (UID: \"452effb5-a499-4c47-a71d-12198ffa37c8\") " pod="openshift-dns/node-resolver-8z796" Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.367937 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.367983 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.368006 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:18.367988812 +0000 UTC m=+23.340300473 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.368132 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/452effb5-a499-4c47-a71d-12198ffa37c8-hosts-file\") pod \"node-resolver-8z796\" (UID: \"452effb5-a499-4c47-a71d-12198ffa37c8\") " pod="openshift-dns/node-resolver-8z796" Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.368193 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:18.368056944 +0000 UTC m=+23.340368495 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.374734 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.384808 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.389320 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.389855 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w4ck\" (UniqueName: \"kubernetes.io/projected/452effb5-a499-4c47-a71d-12198ffa37c8-kube-api-access-7w4ck\") pod \"node-resolver-8z796\" (UID: \"452effb5-a499-4c47-a71d-12198ffa37c8\") " pod="openshift-dns/node-resolver-8z796" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.419677 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-8z796" Jan 30 08:30:16 crc kubenswrapper[4758]: W0130 08:30:16.429424 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod452effb5_a499_4c47_a71d_12198ffa37c8.slice/crio-913999e4fdd4ebe5123fb82db5787435f7d66da71dd5762eb7f1ff9f08e6552b WatchSource:0}: Error finding container 913999e4fdd4ebe5123fb82db5787435f7d66da71dd5762eb7f1ff9f08e6552b: Status 404 returned error can't find the container with id 913999e4fdd4ebe5123fb82db5787435f7d66da71dd5762eb7f1ff9f08e6552b Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.434995 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.468404 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.468591 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.468764 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.468803 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.468816 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.468871 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:18.468856514 +0000 UTC m=+23.441168065 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.469097 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.469207 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.469299 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.469423 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:18.469404381 +0000 UTC m=+23.441715932 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.504744 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-2nkwx"] Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.505128 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d2cb9"] Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.505271 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.505947 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-6t8nj"] Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.506284 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.506946 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-99ddw"] Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.507200 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.507461 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.514037 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.520922 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.521115 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.521128 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.521301 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.521336 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.521785 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.521787 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.522293 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.522307 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.522404 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.522811 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.522817 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.523088 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.523232 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.523446 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.523590 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.527060 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.536613 4758 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"cni-copy-resources" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.560307 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.580228 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.607201 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.612375 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.643392 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.659974 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673262 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-run-netns\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673311 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m99tv\" (UniqueName: \"kubernetes.io/projected/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-kube-api-access-m99tv\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673337 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-netd\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673352 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovn-node-metrics-cert\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673368 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-cni-binary-copy\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673382 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673397 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-daemon-config\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673413 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/95cfcde3-10c8-4ece-a78a-9508f04a0f09-mcd-auth-proxy-config\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673491 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-openvswitch\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673527 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-ovn-kubernetes\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673546 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-var-lib-cni-bin\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673574 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/95cfcde3-10c8-4ece-a78a-9508f04a0f09-rootfs\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673589 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-config\") pod \"ovnkube-node-d2cb9\" 
(UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673606 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-var-lib-cni-multus\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673621 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwcdw\" (UniqueName: \"kubernetes.io/projected/95cfcde3-10c8-4ece-a78a-9508f04a0f09-kube-api-access-zwcdw\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673638 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-systemd-units\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673655 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-env-overrides\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673676 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-conf-dir\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673697 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673713 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-cni-dir\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673731 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-os-release\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673755 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-node-log\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673796 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-cnibin\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673832 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-run-multus-certs\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673890 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-run-k8s-cni-cncf-io\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673925 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/95cfcde3-10c8-4ece-a78a-9508f04a0f09-proxy-tls\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673944 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-kubelet\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673963 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-etc-openvswitch\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.673981 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-hostroot\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674004 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-system-cni-dir\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674026 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-cnibin\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674061 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674080 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-systemd\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674096 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-system-cni-dir\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674111 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-etc-kubernetes\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674138 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-bin\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674160 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmdj9\" (UniqueName: \"kubernetes.io/projected/a682aa56-1a48-46dd-a06c-8cbaaeea7008-kube-api-access-jmdj9\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674187 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-var-lib-openvswitch\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674204 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-log-socket\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 
crc kubenswrapper[4758]: I0130 08:30:16.674224 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-socket-dir-parent\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674239 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-var-lib-kubelet\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674256 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-ovn\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674272 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-script-lib\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674289 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-os-release\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674303 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fac75e9c-fc94-4c83-8613-bce0f4744079-cni-binary-copy\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674321 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cznc6\" (UniqueName: \"kubernetes.io/projected/fac75e9c-fc94-4c83-8613-bce0f4744079-kube-api-access-cznc6\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674336 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-slash\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.674351 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-netns\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 
crc kubenswrapper[4758]: I0130 08:30:16.676450 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.697407 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.698241 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 01:02:47.129309217 +0000 UTC Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.698569 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 
08:30:16.708997 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.723730 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.737232 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.752149 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.767889 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.767915 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.767953 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.768029 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.768153 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:16 crc kubenswrapper[4758]: E0130 08:30:16.768216 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.770109 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774784 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cznc6\" (UniqueName: \"kubernetes.io/projected/fac75e9c-fc94-4c83-8613-bce0f4744079-kube-api-access-cznc6\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc 
kubenswrapper[4758]: I0130 08:30:16.774818 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-ovn\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774838 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-script-lib\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774855 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-os-release\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774872 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fac75e9c-fc94-4c83-8613-bce0f4744079-cni-binary-copy\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774887 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-slash\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774906 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-netns\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774923 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovn-node-metrics-cert\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774939 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-run-netns\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774968 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m99tv\" (UniqueName: \"kubernetes.io/projected/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-kube-api-access-m99tv\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774983 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-netd\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.774999 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-cni-binary-copy\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775015 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775037 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-daemon-config\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775075 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/95cfcde3-10c8-4ece-a78a-9508f04a0f09-mcd-auth-proxy-config\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775094 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-openvswitch\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775108 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-ovn-kubernetes\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775133 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-var-lib-cni-bin\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775150 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/95cfcde3-10c8-4ece-a78a-9508f04a0f09-rootfs\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775166 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-config\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775182 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-var-lib-cni-multus\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775197 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwcdw\" (UniqueName: \"kubernetes.io/projected/95cfcde3-10c8-4ece-a78a-9508f04a0f09-kube-api-access-zwcdw\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775213 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-systemd-units\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775229 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-env-overrides\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775244 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-conf-dir\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775263 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775280 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-cni-dir\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775300 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-os-release\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775321 4758 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-node-log\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775335 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-ovn\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775386 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-netd\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775400 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-cnibin\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776145 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-script-lib\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776197 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-netns\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776268 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-os-release\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776309 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-slash\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776332 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-run-netns\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.775340 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-cnibin\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw"
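
The records above show the two-phase pattern the kubelet logs for every pod volume: reconciler_common.go first reports "operationExecutor.MountVolume started", and operation_generator.go later reports "MountVolume.SetUp succeeded" for the same (volume, pod) pair. A minimal Go sketch for pairing the two phases when triaging a log like this one; the message text is taken verbatim from the records above, while the file name "kubelet.log" is an assumption:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Volume names appear escaped (\"name\") inside the record; the trailing
	// pod="ns/name" field is unescaped, so it is matched separately.
	started := regexp.MustCompile(`operationExecutor\.MountVolume started for volume \\"([^\\"]+)\\".* pod="([^"]+)"`)
	succeeded := regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\"([^\\"]+)\\".* pod="([^"]+)"`)

	f, err := os.Open("kubelet.log") // assumed path to a log like this one
	if err != nil {
		panic(err)
	}
	defer f.Close()

	pending := map[string]bool{} // "pod/volume" started but not yet succeeded
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // some records are very long
	for sc.Scan() {
		line := sc.Text()
		if m := started.FindStringSubmatch(line); m != nil {
			pending[m[2]+"/"+m[1]] = true
		}
		if m := succeeded.FindStringSubmatch(line); m != nil {
			delete(pending, m[2]+"/"+m[1])
		}
	}
	for key := range pending {
		fmt.Println("no SetUp success recorded for:", key)
	}
}
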
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776882 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-run-k8s-cni-cncf-io\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776912 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-cni-binary-copy\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776998 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-run-k8s-cni-cncf-io\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776977 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-run-multus-certs\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776751 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776927 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-run-multus-certs\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777093 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777134 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-systemd-units\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777150 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/95cfcde3-10c8-4ece-a78a-9508f04a0f09-proxy-tls\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx"
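
Nearly all of these SetUp successes land within a few hundred microseconds of the matching "started" record because hostPath volumes have nothing to attach or format: setup reduces to validating (or creating) a path on the node. A toy sketch of that check, loosely modeled on the hostPath "DirectoryOrCreate" behavior rather than taken from the kubelet's actual code; /run/ovn is one of the host paths mounted for ovnkube-node-d2cb9 above:

package main

import (
	"fmt"
	"os"
)

// checkHostPath mimics the spirit of a hostPath volume's setup: the path
// must exist and be a directory, or be created when the type allows it.
func checkHostPath(path string, createIfMissing bool) error {
	info, err := os.Stat(path)
	if os.IsNotExist(err) {
		if !createIfMissing {
			return fmt.Errorf("hostPath %q does not exist", path)
		}
		return os.MkdirAll(path, 0o755)
	}
	if err != nil {
		return err
	}
	if !info.IsDir() {
		return fmt.Errorf("hostPath %q exists but is not a directory", path)
	}
	return nil
}

func main() {
	if err := checkHostPath("/run/ovn", false); err != nil {
		fmt.Println("setup would fail:", err)
	} else {
		fmt.Println("setup ok")
	}
}
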
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777179 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-kubelet\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777205 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-etc-openvswitch\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777227 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-system-cni-dir\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777248 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-hostroot\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777273 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-system-cni-dir\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777288 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-daemon-config\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777319 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-cnibin\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777295 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-cnibin\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.776757 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fac75e9c-fc94-4c83-8613-bce0f4744079-cni-binary-copy\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777363 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777394 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-systemd\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777417 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-etc-kubernetes\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777428 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/95cfcde3-10c8-4ece-a78a-9508f04a0f09-mcd-auth-proxy-config\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777456 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-bin\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777468 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-openvswitch\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777479 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmdj9\" (UniqueName: \"kubernetes.io/projected/a682aa56-1a48-46dd-a06c-8cbaaeea7008-kube-api-access-jmdj9\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777495 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-ovn-kubernetes\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777507 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-var-lib-openvswitch\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777523 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-var-lib-cni-bin\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777526 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-log-socket\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777549 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-log-socket\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777560 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-socket-dir-parent\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777582 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-var-lib-kubelet\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777613 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-cni-dir\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777639 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-var-lib-kubelet\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777661 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-os-release\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777667 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/95cfcde3-10c8-4ece-a78a-9508f04a0f09-rootfs\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx"
Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777696 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-node-log\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777712 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-env-overrides\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777760 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-conf-dir\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777819 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-bin\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777862 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-systemd\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777894 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-etc-kubernetes\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777911 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-system-cni-dir\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777944 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-kubelet\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.777970 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-etc-openvswitch\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.778007 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-hostroot\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.778036 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-var-lib-openvswitch\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.778115 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-config\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.778086 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-host-var-lib-cni-multus\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.778171 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fac75e9c-fc94-4c83-8613-bce0f4744079-multus-socket-dir-parent\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.778182 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-system-cni-dir\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.778323 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.782488 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovn-node-metrics-cert\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.785151 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/95cfcde3-10c8-4ece-a78a-9508f04a0f09-proxy-tls\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.796606 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmdj9\" (UniqueName: \"kubernetes.io/projected/a682aa56-1a48-46dd-a06c-8cbaaeea7008-kube-api-access-jmdj9\") pod \"ovnkube-node-d2cb9\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.799015 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.799296 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cznc6\" (UniqueName: \"kubernetes.io/projected/fac75e9c-fc94-4c83-8613-bce0f4744079-kube-api-access-cznc6\") pod \"multus-99ddw\" (UID: \"fac75e9c-fc94-4c83-8613-bce0f4744079\") " pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.799625 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwcdw\" (UniqueName: \"kubernetes.io/projected/95cfcde3-10c8-4ece-a78a-9508f04a0f09-kube-api-access-zwcdw\") pod \"machine-config-daemon-2nkwx\" (UID: \"95cfcde3-10c8-4ece-a78a-9508f04a0f09\") " pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.805827 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m99tv\" (UniqueName: \"kubernetes.io/projected/518ee414-95c2-4ee2-8bef-bd1af1d5afb4-kube-api-access-m99tv\") pod \"multus-additional-cni-plugins-6t8nj\" (UID: \"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\") " pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: 
I0130 08:30:16.812579 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.820126 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.827369 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8cc
f4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.828520 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:16 crc kubenswrapper[4758]: W0130 08:30:16.832131 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95cfcde3_10c8_4ece_a78a_9508f04a0f09.slice/crio-1fd75654cade1ab92c0a4d1e8c35ac282ae9ceed0a3f510032777a69b8bb2065 WatchSource:0}: Error finding container 1fd75654cade1ab92c0a4d1e8c35ac282ae9ceed0a3f510032777a69b8bb2065: Status 404 returned error can't find the container with id 1fd75654cade1ab92c0a4d1e8c35ac282ae9ceed0a3f510032777a69b8bb2065 Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.841845 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.842006 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-99ddw" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.853223 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.856384 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.871993 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: W0130 08:30:16.872835 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod518ee414_95c2_4ee2_8bef_bd1af1d5afb4.slice/crio-786c58176a857b8fe83287a9c96883fcbb01cd330f36816681653b737bcdd5b8 WatchSource:0}: Error finding container 786c58176a857b8fe83287a9c96883fcbb01cd330f36816681653b737bcdd5b8: Status 404 returned error can't find the container with id 786c58176a857b8fe83287a9c96883fcbb01cd330f36816681653b737bcdd5b8 Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.896198 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"8d70505dbacf380ad755907b0497b938a8a8916ec3d2072e37bb1856843d9c78"} Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.897212 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"1fd75654cade1ab92c0a4d1e8c35ac282ae9ceed0a3f510032777a69b8bb2065"} Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.898257 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.899206 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.901179 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8z796" 
event={"ID":"452effb5-a499-4c47-a71d-12198ffa37c8","Type":"ContainerStarted","Data":"658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67"} Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.901219 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8z796" event={"ID":"452effb5-a499-4c47-a71d-12198ffa37c8","Type":"ContainerStarted","Data":"913999e4fdd4ebe5123fb82db5787435f7d66da71dd5762eb7f1ff9f08e6552b"} Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.902929 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.905661 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99ddw" event={"ID":"fac75e9c-fc94-4c83-8613-bce0f4744079","Type":"ContainerStarted","Data":"63f3857ee1c13d78c65503cac29c070d58c87da4f89329d3e96e0c89b8e387d4"} Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.915385 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerStarted","Data":"786c58176a857b8fe83287a9c96883fcbb01cd330f36816681653b737bcdd5b8"} Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.919624 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.948628 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.958393 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.965513 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.969134 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 08:30:16 crc kubenswrapper[4758]: I0130 08:30:16.991184 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.007329 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.024459 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.042171 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.062094 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.082628 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.097389 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.115887 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.135244 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.152645 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.172127 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.193427 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.570978 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.585447 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.586238 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.588493 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.601112 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.615715 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.626828 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.644960 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.662068 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.679410 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.697492 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: 
I0130 08:30:17.698416 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 05:50:59.025007573 +0000 UTC Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.710834 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.723997 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.741269 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.756486 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.779402 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42
f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.799482 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.820509 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.838189 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec
8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.849486 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.861279 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.872371 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.883923 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.900546 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.910341 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006"} Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.910395 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916"} Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.911657 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697"} Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.912810 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99ddw" event={"ID":"fac75e9c-fc94-4c83-8613-bce0f4744079","Type":"ContainerStarted","Data":"0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb"} Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.914135 4758 generic.go:334] "Generic (PLEG): container finished" podID="518ee414-95c2-4ee2-8bef-bd1af1d5afb4" containerID="8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872" exitCode=0 Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.914390 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerDied","Data":"8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872"} Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.915210 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.916018 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac" exitCode=0 Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.916057 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac"} Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.938188 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:17 crc kubenswrapper[4758]: I0130 08:30:17.967616 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: 
I0130 08:30:18.000022 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.039159 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.137405 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z 
is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.155761 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.184533 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034
fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.203429 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.248092 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.292839 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.320194 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.361905 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.399480 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.399944 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.400121 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:30:22.400091493 +0000 UTC m=+27.372403044 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.400222 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.400297 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.400388 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.400446 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.400537 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:22.400473205 +0000 UTC m=+27.372784746 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.400608 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:22.400598579 +0000 UTC m=+27.372910130 (durationBeforeRetry 4s).
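The nestedpendingoperations entries above show the kubelet backing off failed volume operations: each failure pushes the next attempt out by a growing durationBeforeRetry (4s at this point in the capture). A minimal Go sketch of that doubling policy; the 500ms initial delay and the two-minute cap are assumptions for illustration, not values read from the kubelet source.

package main

import (
	"fmt"
	"time"
)

// Assumed bounds; the kubelet's volume-operation backoff has this
// shape, but these exact numbers are illustrative.
const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2 * time.Minute
)

// backoff tracks the growing delay between attempts, mirroring the
// "No retries permitted until ... (durationBeforeRetry 4s)" messages.
type backoff struct {
	delay time.Duration
}

func (b *backoff) next() time.Duration {
	switch {
	case b.delay == 0:
		b.delay = initialDelay
	case b.delay*2 <= maxDelay:
		b.delay *= 2
	default:
		b.delay = maxDelay
	}
	return b.delay
}

func main() {
	var b backoff
	now := time.Now()
	for i := 1; i <= 5; i++ {
		d := b.next()
		fmt.Printf("attempt %d failed; no retries permitted until %s (durationBeforeRetry %s)\n",
			i, now.Add(d).Format(time.RFC3339), d)
	}
}

Run as-is, this prints a 500ms, 1s, 2s, 4s, 8s progression, which is why the same mount failure reappears in the log a few seconds later rather than immediately.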
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.443859 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.480973 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.501402 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.501611 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.501612 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 
08:30:18.501757 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.501805 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.501819 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.501900 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:22.501882674 +0000 UTC m=+27.474194285 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.501764 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.502067 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.502162 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:22.502148083 +0000 UTC m=+27.474459634 (durationBeforeRetry 4s).
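The projected.go entries show why the kube-api-access-* mounts fail as a unit: a projected volume is assembled from several sources (here the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps), and any single missing source fails the whole volume. A sketch of that collect-then-fail shape; the lookup map below is a hypothetical stand-in for the kubelet's object caches.

package main

import "fmt"

// registered stands in for the kubelet's object caches; nothing is
// registered yet, matching the "not registered" errors in the log.
var registered = map[string]bool{}

func getConfigMap(namespace, name string) error {
	if !registered[namespace+"/"+name] {
		return fmt.Errorf("object %q/%q not registered", namespace, name)
	}
	return nil
}

func main() {
	ns := "openshift-network-diagnostics"
	sources := []string{"kube-root-ca.crt", "openshift-service-ca.crt"}

	var errs []error
	for _, name := range sources {
		if err := getConfigMap(ns, name); err != nil {
			// Collect instead of returning early: the log reports
			// every missing source in one aggregate message.
			errs = append(errs, err)
		}
	}
	if len(errs) > 0 {
		// One aggregate error fails the entire projected volume, so
		// the pod's kube-api-access mount cannot be set up until
		// every source exists.
		fmt.Printf("Error preparing data for projected volume: %v\n", errs)
	}
}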
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.517259 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.699117 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 11:30:56.282773384 +0000 UTC Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.768880 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.769072 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.769719 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.769910 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.769729 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:18 crc kubenswrapper[4758]: E0130 08:30:18.770221 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.922449 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerStarted","Data":"7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a"} Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.927331 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4"} Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.927436 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38"} Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.927452 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3"} Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.927465 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda"} Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.938557 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.953900 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.972342 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:18 crc kubenswrapper[4758]: I0130 08:30:18.987089 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:18Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.005435 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.019223 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.034689 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.061911 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z 
is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.070106 4758 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.074064 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.095152 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.116052 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb79
81a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.133315 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.150579 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.310130 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lnh2g"] Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.310727 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.313977 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.314748 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.316245 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.318590 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.328028 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.343118 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.358131 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.373966 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.387873 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.406958 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.415355 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/28b9f864-7294-4168-8200-3dbba23ffc97-serviceca\") pod \"node-ca-lnh2g\" (UID: \"28b9f864-7294-4168-8200-3dbba23ffc97\") " pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.415436 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb85r\" (UniqueName: \"kubernetes.io/projected/28b9f864-7294-4168-8200-3dbba23ffc97-kube-api-access-xb85r\") pod \"node-ca-lnh2g\" (UID: \"28b9f864-7294-4168-8200-3dbba23ffc97\") " pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.415619 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28b9f864-7294-4168-8200-3dbba23ffc97-host\") pod \"node-ca-lnh2g\" (UID: \"28b9f864-7294-4168-8200-3dbba23ffc97\") " pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.422188 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\
\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.461771 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.501479 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.516919 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/28b9f864-7294-4168-8200-3dbba23ffc97-serviceca\") pod \"node-ca-lnh2g\" (UID: \"28b9f864-7294-4168-8200-3dbba23ffc97\") " pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.516992 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb85r\" (UniqueName: \"kubernetes.io/projected/28b9f864-7294-4168-8200-3dbba23ffc97-kube-api-access-xb85r\") pod \"node-ca-lnh2g\" (UID: \"28b9f864-7294-4168-8200-3dbba23ffc97\") " pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.517031 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28b9f864-7294-4168-8200-3dbba23ffc97-host\") pod \"node-ca-lnh2g\" (UID: \"28b9f864-7294-4168-8200-3dbba23ffc97\") " pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.517130 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28b9f864-7294-4168-8200-3dbba23ffc97-host\") pod \"node-ca-lnh2g\" (UID: \"28b9f864-7294-4168-8200-3dbba23ffc97\") " 
pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.517996 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/28b9f864-7294-4168-8200-3dbba23ffc97-serviceca\") pod \"node-ca-lnh2g\" (UID: \"28b9f864-7294-4168-8200-3dbba23ffc97\") " pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.543789 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z 
is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.566636 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb85r\" (UniqueName: \"kubernetes.io/projected/28b9f864-7294-4168-8200-3dbba23ffc97-kube-api-access-xb85r\") pod \"node-ca-lnh2g\" (UID: \"28b9f864-7294-4168-8200-3dbba23ffc97\") " pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.604343 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.639642 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.642736 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lnh2g" Jan 30 08:30:19 crc kubenswrapper[4758]: W0130 08:30:19.659154 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28b9f864_7294_4168_8200_3dbba23ffc97.slice/crio-15a4ef68cc35f6421138c11d11513fa3467262e648409f6415d034a367a1711c WatchSource:0}: Error finding container 15a4ef68cc35f6421138c11d11513fa3467262e648409f6415d034a367a1711c: Status 404 returned error can't find the container with id 15a4ef68cc35f6421138c11d11513fa3467262e648409f6415d034a367a1711c Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.681414 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.699922 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:11:20.089865186 +0000 UTC Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.721140 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete 
status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"nam
e\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\"
,\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.934926 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee"} Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.935001 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32"} Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.936592 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lnh2g" event={"ID":"28b9f864-7294-4168-8200-3dbba23ffc97","Type":"ContainerStarted","Data":"df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f"} Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.936654 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lnh2g" event={"ID":"28b9f864-7294-4168-8200-3dbba23ffc97","Type":"ContainerStarted","Data":"15a4ef68cc35f6421138c11d11513fa3467262e648409f6415d034a367a1711c"} Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.938630 4758 generic.go:334] "Generic (PLEG): container finished" podID="518ee414-95c2-4ee2-8bef-bd1af1d5afb4" containerID="7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a" exitCode=0 Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.938660 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerDied","Data":"7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a"} Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.953536 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.968017 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.981327 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:19 crc kubenswrapper[4758]: I0130 08:30:19.998999 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:19Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.013206 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.028780 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.044672 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.063529 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.066811 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 
08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.068103 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.072859 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.100814 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.145028 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z 
is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.187808 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.190096 4758 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.238986 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.277008 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.327282 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.366504 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.411670 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.443999 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.479701 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.518406 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.557759 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.600097 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.639091 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.683188 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.701929 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:58:38.579392301 +0000 UTC Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.718481 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.757021 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.768239 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.768301 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:20 crc kubenswrapper[4758]: E0130 08:30:20.768398 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:20 crc kubenswrapper[4758]: E0130 08:30:20.768435 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.768304 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:20 crc kubenswrapper[4758]: E0130 08:30:20.768520 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.798951 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"i
mageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.845996 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42
f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.876971 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.915870 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.942304 4758 generic.go:334] "Generic (PLEG): container finished" podID="518ee414-95c2-4ee2-8bef-bd1af1d5afb4" containerID="f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f" exitCode=0 Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.942491 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerDied","Data":"f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f"} Jan 30 08:30:20 crc kubenswrapper[4758]: I0130 08:30:20.958968 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.000238 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:20Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.013856 4758 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.015491 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.015535 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.015544 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.015658 4758 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.038083 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.091086 4758 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.091336 4758 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.092593 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.092618 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.092630 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.092646 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.092657 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: E0130 08:30:21.107685 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.111415 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.111465 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.111475 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.111489 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.111498 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.119904 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: E0130 08:30:21.122743 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.126504 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.126532 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.126539 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.126555 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.126565 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: E0130 08:30:21.137994 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.141372 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.141394 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
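
Every patch attempt in this cycle dies the same way: the kubelet's POST to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 fails TLS verification because the serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-30. A minimal sketch for confirming this from the node — our own probe, not part of the kubelet; only the address comes from the log — dials the endpoint and prints the leaf certificate's validity window, mirroring the x509 check that fails above:

// certprobe.go — sketch: dial the webhook endpoint from the log and report
// the serving certificate's validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// InsecureSkipVerify is deliberate: verification is exactly what is
	// failing, and skipping it is the only way to retrieve the expired leaf.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificates presented")
		return
	}
	cert := certs[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if now := time.Now(); now.After(cert.NotAfter) {
		// Same condition the log reports: "current time ... is after ..."
		fmt.Printf("EXPIRED: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
}

Against this node the probe would print a notAfter of 2025-08-24T17:21:41Z, matching the error string repeated through the rest of the cycle below.
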
event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.141401 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.141415 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.141426 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: E0130 08:30:21.154181 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.157684 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.157704 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.157711 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.157723 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.157732 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.164957 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: E0130 08:30:21.172538 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status payload omitted; byte-identical to the attempt above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: E0130 08:30:21.172664 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
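
The "Error updating node status, will retry" / "update node status exceeds retry count" pair above comes from a bounded retry loop: the kubelet attempts the status PATCH a fixed number of times per sync (the nodeStatusUpdateRetry constant, 5 in upstream kubelet) and only then gives up until the next sync period. A rough sketch of that shape — names mirror upstream kubelet, but the body is ours, with tryUpdateNodeStatus reduced to a stub that always fails the way the webhook does here:

// retry.go — sketch of the bounded retry that produces the two log lines
// above; not the actual kubelet implementation.
package main

import (
	"errors"
	"fmt"
)

// Upstream kubelet uses the same constant and value.
const nodeStatusUpdateRetry = 5

// tryUpdateNodeStatus stands in for a single status PATCH; in this log
// every attempt is rejected by the expired-certificate webhook.
func tryUpdateNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(); err != nil {
			// E ... kubelet_node_status.go:585] "Error updating node status, will retry"
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	// E ... kubelet_node_status.go:572] "Unable to update node status"
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}

Giving up here is not terminal: the loop runs again on the next node-status sync period, which is why the same retry sequence keeps recurring through this log.
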
event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.174708 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.174862 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.175030 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.175189 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.198684 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"nam
e\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.238694 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0
ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.277736 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.277760 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.277767 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.277779 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.277790 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.279865 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.326194 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.359934 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.379664 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.379886 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.379950 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.380010 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.380091 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.398669 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.443345 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30
T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.482987 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.483067 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.483082 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.483103 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.483115 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.496245 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.521672 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.557610 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.586785 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.586846 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.586861 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.586891 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.586909 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.689191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.689230 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.689242 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.689263 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.689275 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.702128 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:46:11.43930709 +0000 UTC Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.792101 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.792310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.792375 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.792454 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.792524 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.895143 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.895199 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.895225 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.895248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.895287 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.950355 4758 generic.go:334] "Generic (PLEG): container finished" podID="518ee414-95c2-4ee2-8bef-bd1af1d5afb4" containerID="430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39" exitCode=0 Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.950398 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerDied","Data":"430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.955566 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26"} Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.970962 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.983522 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.998984 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:21Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.999185 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.999224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.999233 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.999249 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:21 crc kubenswrapper[4758]: I0130 08:30:21.999258 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:21Z","lastTransitionTime":"2026-01-30T08:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.012568 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.026620 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.041022 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.052666 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.066332 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.091498 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.102423 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.102459 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.102467 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.102482 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.102491 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:22Z","lastTransitionTime":"2026-01-30T08:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.113733 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.180821 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.197857 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.204461 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.204618 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.204699 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.204791 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.204869 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:22Z","lastTransitionTime":"2026-01-30T08:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.211658 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.232291 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.247929 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.307158 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.307199 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.307210 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.307231 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.307244 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:22Z","lastTransitionTime":"2026-01-30T08:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.410365 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.410407 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.410416 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.410622 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.410634 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:22Z","lastTransitionTime":"2026-01-30T08:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.450478 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.450749 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:30:30.450715623 +0000 UTC m=+35.423027184 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.451127 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.451250 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.451406 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:30.451362343 +0000 UTC m=+35.423673904 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.451512 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.451590 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.451869 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:30.451844678 +0000 UTC m=+35.424156269 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.514328 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.514379 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.514395 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.514419 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.514436 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:22Z","lastTransitionTime":"2026-01-30T08:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.552223 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.552264 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.552416 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.552447 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.552459 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.552458 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.552492 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.552500 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:30.552488074 +0000 UTC m=+35.524799625 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.552510 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.552581 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:30.552557336 +0000 UTC m=+35.524868947 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.617286 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.617348 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.617365 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.617397 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.617416 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:22Z","lastTransitionTime":"2026-01-30T08:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.704302 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 11:31:42.272569091 +0000 UTC Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.720261 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.720314 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.720326 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.720345 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.720357 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:22Z","lastTransitionTime":"2026-01-30T08:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.768022 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.768022 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.768158 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.768213 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.768465 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:22 crc kubenswrapper[4758]: E0130 08:30:22.768532 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.822182 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.822397 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.822408 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.822423 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.822447 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:22Z","lastTransitionTime":"2026-01-30T08:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.924890 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.924943 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.924952 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.924966 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.924974 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:22Z","lastTransitionTime":"2026-01-30T08:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.961594 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerStarted","Data":"41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792"} Jan 30 08:30:22 crc kubenswrapper[4758]: I0130 08:30:22.998546 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b
90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.017345 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.028303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.028365 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.028374 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.028399 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.028411 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.035848 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.063702 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.077867 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.091235 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.111370 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.130471 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.132190 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.132233 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.132242 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.132257 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.132267 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.157572 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.178883 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.194571 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.209638 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.222221 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.234546 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.234772 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.234927 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.235076 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.235197 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.236627 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.259641 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.337476 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.337645 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.337722 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.337823 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.337897 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.441371 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.441424 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.441440 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.441469 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.441487 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.544533 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.545029 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.545275 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.545422 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.545618 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.653471 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.653555 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.653584 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.653607 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.653620 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.704866 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 10:30:50.041511205 +0000 UTC Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.781281 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.781349 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.781365 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.781385 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.781400 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.884030 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.884431 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.884545 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.884616 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.884688 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.968803 4758 generic.go:334] "Generic (PLEG): container finished" podID="518ee414-95c2-4ee2-8bef-bd1af1d5afb4" containerID="41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792" exitCode=0 Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.968999 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerDied","Data":"41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.978588 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.980641 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.980737 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.987009 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.987132 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.987153 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.987185 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.987203 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:23Z","lastTransitionTime":"2026-01-30T08:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:23 crc kubenswrapper[4758]: I0130 08:30:23.992478 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:23Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.009193 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.028476 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.031677 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.043896 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.060048 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.080294 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.090224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.090269 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.090279 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.090293 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.090304 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:24Z","lastTransitionTime":"2026-01-30T08:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.104375 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.117688 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.135621 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z 
is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.150219 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.164851 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.184908 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ec
d6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.193224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.193277 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.193290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.193306 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.193317 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:24Z","lastTransitionTime":"2026-01-30T08:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.197315 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.212745 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.232583 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.250258 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.266838 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.282750 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.295191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.295225 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.295233 4758 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.295248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.295259 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:24Z","lastTransitionTime":"2026-01-30T08:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.295359 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.306377 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.319861 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.333418 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.352259 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.370431 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e
24b1528f98bfd1050e407fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.389023 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42
f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.397753 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.397789 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.397797 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.397845 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.397858 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:24Z","lastTransitionTime":"2026-01-30T08:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.402795 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.413266 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.432198 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.450637 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.464953 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.500346 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.500399 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.500417 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.500438 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.500451 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:24Z","lastTransitionTime":"2026-01-30T08:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.606122 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.606225 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.606302 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.606409 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.606509 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:24Z","lastTransitionTime":"2026-01-30T08:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.705124 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 06:17:35.121926556 +0000 UTC Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.711295 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.711356 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.711381 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.711409 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.711429 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:24Z","lastTransitionTime":"2026-01-30T08:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.768284 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:24 crc kubenswrapper[4758]: E0130 08:30:24.768824 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.768289 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:24 crc kubenswrapper[4758]: E0130 08:30:24.769170 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.768282 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:24 crc kubenswrapper[4758]: E0130 08:30:24.769362 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.815028 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.815097 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.815110 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.815130 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.815145 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:24Z","lastTransitionTime":"2026-01-30T08:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.918235 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.918295 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.918312 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.918338 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.918355 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:24Z","lastTransitionTime":"2026-01-30T08:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.985287 4758 generic.go:334] "Generic (PLEG): container finished" podID="518ee414-95c2-4ee2-8bef-bd1af1d5afb4" containerID="a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06" exitCode=0 Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.985471 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerDied","Data":"a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06"} Jan 30 08:30:24 crc kubenswrapper[4758]: I0130 08:30:24.986181 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.004518 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:24Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.017801 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.023964 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.024197 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.024259 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.024282 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.024315 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.024341 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.040930 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name
\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.059218 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.075386 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.089943 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.103290 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.114352 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.128140 4758 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.128211 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.128229 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.128257 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.128278 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.129206 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.143263 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.163005 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e
24b1528f98bfd1050e407fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.179258 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.199515 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42
f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.217826 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.231262 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.231229 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.231291 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.231484 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.231520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.231535 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.242087 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.255316 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.274723 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702
f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.286208 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.298424 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.307503 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.319698 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.333897 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.333966 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.333987 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.334010 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.334025 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.337434 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.351727 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.363681 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.375789 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.388317 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.400709 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.416655 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.436450 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.436485 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.436496 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.436517 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.436528 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.436631 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e
24b1528f98bfd1050e407fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.538516 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.538559 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.538569 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.538592 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.538605 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.641371 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.641656 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.641804 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.642135 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.642461 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.706662 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 17:57:02.160233454 +0000 UTC Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.745303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.745447 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.745586 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.745651 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.745754 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.782545 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.798432 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.813438 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.837216 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.848604 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.848639 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.848649 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.848664 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.848674 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.863787 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.876663 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.893172 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.915284 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.935796 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e
24b1528f98bfd1050e407fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.950453 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.952343 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.952481 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.952692 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.952806 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.952842 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:25Z","lastTransitionTime":"2026-01-30T08:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.965254 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.986184 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:25 crc kubenswrapper[4758]: I0130 08:30:25.994183 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" event={"ID":"518ee414-95c2-4ee2-8bef-bd1af1d5afb4","Type":"ContainerStarted","Data":"e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872"} Jan 30 08:30:26 crc 
kubenswrapper[4758]: I0130 08:30:26.013242 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.032997 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.044500 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.055340 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.055381 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.055391 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.055408 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.055419 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.057914 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.069539 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.085507 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.104266 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.117024 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.131471 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.143861 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.157817 4758 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.157867 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.157883 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.157905 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.157919 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.160128 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.173919 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.188074 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.204238 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e
24b1528f98bfd1050e407fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.231593 4758 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.260851 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.260907 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.260928 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.260956 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.261074 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.275508 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.301463 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.342340 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.364137 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.364191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.364203 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.364225 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.364238 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.381307 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac719363291512
2eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\"
,\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:26Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.466279 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.466317 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.466328 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.466343 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.466354 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.568510 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.568539 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.568547 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.568558 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.568567 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.671250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.671579 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.671660 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.671824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.671919 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.707383 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 01:32:07.403698677 +0000 UTC Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.767829 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.767848 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.768167 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:26 crc kubenswrapper[4758]: E0130 08:30:26.768314 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:26 crc kubenswrapper[4758]: E0130 08:30:26.768510 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:26 crc kubenswrapper[4758]: E0130 08:30:26.768549 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.774758 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.774783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.774791 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.774802 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.774811 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.877894 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.877925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.877936 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.877952 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.877963 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.979968 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.979999 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.980008 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.980020 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:26 crc kubenswrapper[4758]: I0130 08:30:26.980029 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:26Z","lastTransitionTime":"2026-01-30T08:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.082013 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.082326 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.082334 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.082347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.082356 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:27Z","lastTransitionTime":"2026-01-30T08:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.184555 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.184602 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.184618 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.184640 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.184657 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:27Z","lastTransitionTime":"2026-01-30T08:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.287257 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.287296 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.287304 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.287319 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.287328 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:27Z","lastTransitionTime":"2026-01-30T08:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.389579 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.389634 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.389650 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.389673 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.389690 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:27Z","lastTransitionTime":"2026-01-30T08:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.491665 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.491716 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.491733 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.491756 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.491773 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:27Z","lastTransitionTime":"2026-01-30T08:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.594225 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.594290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.594310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.594333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.594350 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:27Z","lastTransitionTime":"2026-01-30T08:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.697698 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.697775 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.697799 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.697828 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.697851 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:27Z","lastTransitionTime":"2026-01-30T08:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.708350 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:23:53.805352423 +0000 UTC Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.800895 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.800951 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.800968 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.800994 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.801012 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:27Z","lastTransitionTime":"2026-01-30T08:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.904424 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.904469 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.904482 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.904499 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.904511 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:27Z","lastTransitionTime":"2026-01-30T08:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.930073 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq"] Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.930823 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.933761 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.940244 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.952649 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controlle
r-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:27Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.968301 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:27Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:27 crc kubenswrapper[4758]: I0130 08:30:27.991191 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e
24b1528f98bfd1050e407fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:27Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.002679 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/0.log" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.006650 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.006681 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.006693 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.006731 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.006743 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.006755 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac" exitCode=1
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.006803 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac"}
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.007899 4758 scope.go:117] "RemoveContainer" containerID="706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.016649 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.027985 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0944aacc-db22-4503-990b-f5724b55d4ae-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.028079 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96fvp\" (UniqueName: \"kubernetes.io/projected/0944aacc-db22-4503-990b-f5724b55d4ae-kube-api-access-96fvp\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.028632 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0944aacc-db22-4503-990b-f5724b55d4ae-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.029710 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0944aacc-db22-4503-990b-f5724b55d4ae-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.037794 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.052344 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.070462 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.086495 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.099506 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.110629 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.110672 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.110683 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.110702 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.110714 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.113699 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.127894 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.130403 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0944aacc-db22-4503-990b-f5724b55d4ae-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.130645 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0944aacc-db22-4503-990b-f5724b55d4ae-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.130797 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96fvp\" (UniqueName: \"kubernetes.io/projected/0944aacc-db22-4503-990b-f5724b55d4ae-kube-api-access-96fvp\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.130969 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0944aacc-db22-4503-990b-f5724b55d4ae-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.131137 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0944aacc-db22-4503-990b-f5724b55d4ae-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.131222 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0944aacc-db22-4503-990b-f5724b55d4ae-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.144353 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0944aacc-db22-4503-990b-f5724b55d4ae-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.144684 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.152249 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96fvp\" (UniqueName: \"kubernetes.io/projected/0944aacc-db22-4503-990b-f5724b55d4ae-kube-api-access-96fvp\") pod \"ovnkube-control-plane-749d76644c-bx4hq\" (UID: \"0944aacc-db22-4503-990b-f5724b55d4ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.160468 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.176838 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.194004 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.206902 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.212660 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.212697 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.212707 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.212723 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.212733 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.219549 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.236371 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.245003 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.254312 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e
24b1528f98bfd1050e407fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\".AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 08:30:27.003894 5947 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004435 5947 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004494 5947 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004570 5947 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004775 5947 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.005505 5947 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.005540 5947 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0130 08:30:27.005679 5947 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343
d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: W0130 08:30:28.258512 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0944aacc_db22_4503_990b_f5724b55d4ae.slice/crio-207d8c285793572f58c7ac419f8097e9b188dcbbd09ce973bab562ddf0e2bae5 WatchSource:0}: Error finding container 207d8c285793572f58c7ac419f8097e9b188dcbbd09ce973bab562ddf0e2bae5: Status 404 returned error can't find the container with id 207d8c285793572f58c7ac419f8097e9b188dcbbd09ce973bab562ddf0e2bae5 Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.267115 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.281758 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.306546 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42
f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.317091 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.317129 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.317139 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.317183 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.317194 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.318637 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.329521 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.339281 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.349975 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.360892 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\
"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.378750 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.395922 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.409001 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.420077 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.420115 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.420129 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.420145 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.420157 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.425488 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.439251 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.521973 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.522007 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.522018 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.522034 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.522061 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.624854 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.624896 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.624906 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.624919 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.624927 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.709079 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:15:37.172109944 +0000 UTC Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.727272 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.727318 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.727329 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.727346 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.727358 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.747861 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.767952 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.768018 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:28 crc kubenswrapper[4758]: E0130 08:30:28.768122 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.768127 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:28 crc kubenswrapper[4758]: E0130 08:30:28.768237 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:28 crc kubenswrapper[4758]: E0130 08:30:28.768426 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.769208 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e
24b1528f98bfd1050e407fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\".AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 08:30:27.003894 5947 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004435 5947 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004494 5947 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004570 5947 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004775 5947 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.005505 5947 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.005540 5947 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0130 08:30:27.005679 5947 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343
d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.783848 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403
208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.809272 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.829900 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.829943 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.829956 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.829972 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.829984 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.830326 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.850645 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.864532 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.889736 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 
2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.912629 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.932207 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.932242 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.932250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.932263 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.932272 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:28Z","lastTransitionTime":"2026-01-30T08:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.952702 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.971577 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.988376 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:28 crc kubenswrapper[4758]: I0130 08:30:28.999257 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:28Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.011975 4758 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/0.log" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.014259 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.014932 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" event={"ID":"0944aacc-db22-4503-990b-f5724b55d4ae","Type":"ContainerStarted","Data":"207d8c285793572f58c7ac419f8097e9b188dcbbd09ce973bab562ddf0e2bae5"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.017013 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:29Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.030560 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:29Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.038807 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.038857 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.038870 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.038889 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.038904 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.046981 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:29Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.058855 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:29Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.140708 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.140734 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.140742 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.140754 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.140765 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.242910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.243319 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.243798 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.243871 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.243934 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.346526 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.346586 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.346599 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.346617 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.346652 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.449080 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.449111 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.449139 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.449157 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.449168 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.551517 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.551559 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.551570 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.551586 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.551596 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.654226 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.654272 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.654286 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.654309 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.654323 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.709199 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 04:50:15.847761413 +0000 UTC Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.756916 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.756975 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.756990 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.757008 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.757024 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.859630 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.859659 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.859670 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.859682 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.859694 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.962332 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.962371 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.962383 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.962401 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:29 crc kubenswrapper[4758]: I0130 08:30:29.962416 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:29Z","lastTransitionTime":"2026-01-30T08:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.020660 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" event={"ID":"0944aacc-db22-4503-990b-f5724b55d4ae","Type":"ContainerStarted","Data":"1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.021114 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" event={"ID":"0944aacc-db22-4503-990b-f5724b55d4ae","Type":"ContainerStarted","Data":"e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.022734 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/1.log" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.023488 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/0.log" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.027132 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6" exitCode=1 Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.027180 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.027357 4758 scope.go:117] "RemoveContainer" containerID="706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.028027 4758 scope.go:117] "RemoveContainer" containerID="beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.028286 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.065033 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.065099 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.065112 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.065129 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.065141 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.073863 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb
68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.095394 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.109363 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.125468 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.139647 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.151821 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.165381 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-gj6b4"] Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.166125 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.166269 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.167429 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.167579 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.167672 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.167766 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.167857 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.168881 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.180549 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.192985 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.207525 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.218822 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.231642 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.242067 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.253699 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.253763 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zv8j\" (UniqueName: \"kubernetes.io/projected/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-kube-api-access-9zv8j\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.253811 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126b
d791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.266377 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.270452 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.270487 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.270498 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.270513 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.270524 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.289701 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\".AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 08:30:27.003894 5947 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004435 5947 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004494 5947 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004570 5947 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004775 5947 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.005505 5947 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.005540 5947 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0130 08:30:27.005679 5947 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343
d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.300228 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.311268 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.325154 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.338419 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.349206 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.354567 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zv8j\" (UniqueName: \"kubernetes.io/projected/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-kube-api-access-9zv8j\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.354655 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.354785 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.354867 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs podName:83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:30.854827842 +0000 UTC m=+35.827139393 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs") pod "network-metrics-daemon-gj6b4" (UID: "83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.359761 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.369930 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.371139 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9zv8j\" (UniqueName: \"kubernetes.io/projected/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-kube-api-access-9zv8j\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.372791 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.372847 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.372869 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.372886 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.372898 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.386080 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\
\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.397158 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 
08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.408633 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.421106 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.446017 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec
8f53888d62eaa532bc34f3c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://706b7f478df428fb8c2826f554a82a08aeb9c38e24b1528f98bfd1050e407fac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"message\\\":\\\".AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 08:30:27.003894 5947 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004435 5947 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004494 5947 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004570 5947 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.004775 5947 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.005505 5947 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 08:30:27.005540 5947 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0130 08:30:27.005679 5947 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:29Z\\\",\\\"message\\\":\\\"olumns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08:30:29.351524 6111 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352789 6111 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.353501 6111 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0130 08:30:29.353511 6111 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 08:30:29.353517 6111 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352752 6111 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}\\\\nI0130 08:30:29.353541 6111 services_controller.go:360] Finished syncing service 
machine-api-operator-webhook on namespace openshift-machine-api for network=default : 4.891663ms\\\\nF0130 08:30:29.352795 6111 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.455823 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.455899 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.455920 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:30:46.455900331 +0000 UTC m=+51.428211882 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.455979 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.455990 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.456008 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:46.456001105 +0000 UTC m=+51.428312656 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.456066 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.456106 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:46.456096798 +0000 UTC m=+51.428408349 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.464512 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.475588 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.475625 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.475656 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.475673 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.475685 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.486351 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.498085 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.507197 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.520020 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:30Z is after 
2025-08-24T17:21:41Z" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.556567 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.556607 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.556731 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.556744 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.556754 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.556763 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.556793 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.556800 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:46.556788405 +0000 UTC m=+51.529099956 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.556805 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.556854 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:46.556839816 +0000 UTC m=+51.529151367 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.578841 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.578901 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.578917 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.578943 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.578960 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.681341 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.681379 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.681388 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.681406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.681418 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.710806 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:16:07.940673444 +0000 UTC Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.767663 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.767765 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.767673 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.767915 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.767986 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.768282 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.789313 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.789359 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.789373 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.789393 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.789403 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.859578 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.859724 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: E0130 08:30:30.859769 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs podName:83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:31.859756353 +0000 UTC m=+36.832067904 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs") pod "network-metrics-daemon-gj6b4" (UID: "83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.891648 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.891675 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.891682 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.891694 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.891703 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.994188 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.994492 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.994582 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.994676 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:30 crc kubenswrapper[4758]: I0130 08:30:30.994833 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:30Z","lastTransitionTime":"2026-01-30T08:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.036539 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/1.log" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.039143 4758 scope.go:117] "RemoveContainer" containerID="beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6" Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.039301 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.052507 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.072352 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.096857 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.096908 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.096925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.096948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.096964 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.097199 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:29Z\\\",\\\"message\\\":\\\"olumns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08:30:29.351524 6111 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352789 6111 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.353501 6111 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0130 08:30:29.353511 6111 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 08:30:29.353517 6111 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352752 6111 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}\\\\nI0130 08:30:29.353541 6111 services_controller.go:360] Finished syncing service machine-api-operator-webhook on namespace openshift-machine-api for network=default : 4.891663ms\\\\nF0130 08:30:29.352795 6111 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.115073 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42
f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.127789 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.138483 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.155903 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 
2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.169716 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.182049 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.183196 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.183261 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.183275 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.183293 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.183306 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.196320 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 
2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.199402 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.200463 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.200499 4758 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.200512 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.200531 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.200544 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.214369 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.215620 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 
2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.221439 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.221496 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.221516 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.221546 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.221567 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.231459 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a
4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.237520 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.242186 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.242224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.242237 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.242253 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.242267 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.243742 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.256649 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4
febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.260098 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.260135 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.260144 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.260157 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.260166 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.260188 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.272828 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.272933 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.274645 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.274669 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.274677 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.274689 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.274698 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.276969 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.288767 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.303156 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:31Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.377437 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.377484 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.377494 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.377510 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.377522 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.480928 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.481245 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.481370 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.481488 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.481677 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.583878 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.583935 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.583951 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.583968 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.583979 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.686932 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.686989 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.687001 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.687018 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.687033 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.711503 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 22:28:36.293089809 +0000 UTC Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.768533 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.768659 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.791462 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.791640 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.791726 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.791813 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.791915 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.869060 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.869217 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:31 crc kubenswrapper[4758]: E0130 08:30:31.869267 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs podName:83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:33.869253986 +0000 UTC m=+38.841565527 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs") pod "network-metrics-daemon-gj6b4" (UID: "83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.894814 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.894864 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.894878 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.894896 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.894908 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.997361 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.997407 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.997418 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.997431 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:31 crc kubenswrapper[4758]: I0130 08:30:31.997459 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:31Z","lastTransitionTime":"2026-01-30T08:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.100100 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.100163 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.100184 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.100211 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.100232 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:32Z","lastTransitionTime":"2026-01-30T08:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.202973 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.203071 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.203096 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.203125 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.203149 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:32Z","lastTransitionTime":"2026-01-30T08:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.306298 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.306329 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.306338 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.306351 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.306360 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:32Z","lastTransitionTime":"2026-01-30T08:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.409954 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.409982 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.409990 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.410001 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.410010 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:32Z","lastTransitionTime":"2026-01-30T08:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.511710 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.511753 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.511767 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.511784 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.511802 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:32Z","lastTransitionTime":"2026-01-30T08:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.615473 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.615517 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.615529 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.615546 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.615562 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:32Z","lastTransitionTime":"2026-01-30T08:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.712277 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 04:47:53.513045414 +0000 UTC Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.717885 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.717919 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.717930 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.717946 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.717958 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:32Z","lastTransitionTime":"2026-01-30T08:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.768485 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:32 crc kubenswrapper[4758]: E0130 08:30:32.768697 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.768577 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:32 crc kubenswrapper[4758]: E0130 08:30:32.768857 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.768598 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:32 crc kubenswrapper[4758]: E0130 08:30:32.769194 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.820853 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.820895 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.820910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.820929 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.820945 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:32Z","lastTransitionTime":"2026-01-30T08:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.922829 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.922880 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.922896 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.922944 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:32 crc kubenswrapper[4758]: I0130 08:30:32.922961 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:32Z","lastTransitionTime":"2026-01-30T08:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.025669 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.025733 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.025744 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.025759 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.025768 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.128367 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.128402 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.128410 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.128424 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.128435 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.231293 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.231353 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.231366 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.231382 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.231396 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.333431 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.333462 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.333471 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.333486 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.333496 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.436638 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.437030 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.437210 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.437330 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.437450 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.540950 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.541254 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.541432 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.541564 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.541687 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.645619 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.645665 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.645676 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.645695 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.645713 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.712869 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:21:08.609220653 +0000 UTC
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.748677 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.748734 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.748746 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.748763 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.748777 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.768668 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4"
Jan 30 08:30:33 crc kubenswrapper[4758]: E0130 08:30:33.769145 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.851745 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.851783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.851793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.851805 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.851815 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.892254 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4"
Jan 30 08:30:33 crc kubenswrapper[4758]: E0130 08:30:33.892579 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 08:30:33 crc kubenswrapper[4758]: E0130 08:30:33.893467 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs podName:83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:37.893368179 +0000 UTC m=+42.865679770 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs") pod "network-metrics-daemon-gj6b4" (UID: "83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.956255 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.956811 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.957086 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.957283 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:33 crc kubenswrapper[4758]: I0130 08:30:33.957535 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:33Z","lastTransitionTime":"2026-01-30T08:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.061677 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.061718 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.061729 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.061769 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.061780 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.165653 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.166310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.166409 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.166489 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.166547 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.269761 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.269811 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.269829 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.269852 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.269869 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.372468 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.372506 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.372514 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.372528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.372539 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.476160 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.476226 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.476248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.476273 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.476291 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.579112 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.579188 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.579211 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.579238 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.579257 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.681778 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.681831 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.681840 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.681853 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.681863 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.713958 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 14:21:32.618158121 +0000 UTC
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.767853 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.768001 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:30:34 crc kubenswrapper[4758]: E0130 08:30:34.768121 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 08:30:34 crc kubenswrapper[4758]: E0130 08:30:34.768256 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.768371 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 08:30:34 crc kubenswrapper[4758]: E0130 08:30:34.768589 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.786440 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.786708 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.786781 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.786904 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.786988 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.890784 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.890825 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.890838 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.890856 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.890867 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.994534 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.994579 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.994593 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.994609 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:34 crc kubenswrapper[4758]: I0130 08:30:34.994621 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:34Z","lastTransitionTime":"2026-01-30T08:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.097312 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.097385 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.097399 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.097419 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.097432 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:35Z","lastTransitionTime":"2026-01-30T08:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.199793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.199831 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.199839 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.199854 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.199864 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:35Z","lastTransitionTime":"2026-01-30T08:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.302235 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.302262 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.302270 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.302285 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.302300 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:35Z","lastTransitionTime":"2026-01-30T08:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.404843 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.405117 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.405210 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.405291 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.405410 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:35Z","lastTransitionTime":"2026-01-30T08:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.507197 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.507435 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.507493 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.507549 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.507615 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:35Z","lastTransitionTime":"2026-01-30T08:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.610573 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.610785 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.610854 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.610918 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.610974 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:35Z","lastTransitionTime":"2026-01-30T08:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.713730 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.713773 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.713783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.713799 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.713809 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:35Z","lastTransitionTime":"2026-01-30T08:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.714114 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 14:59:16.797740051 +0000 UTC Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.768102 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:35 crc kubenswrapper[4758]: E0130 08:30:35.768255 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.785650 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.805617 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log
/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.817113 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.817180 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.817198 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.817222 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.817268 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:35Z","lastTransitionTime":"2026-01-30T08:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.820101 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.831634 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.845827 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.862145 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.875127 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.889662 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\
\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.900096 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.915170 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.919551 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.919578 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.919587 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.919600 4758 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.919610 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:35Z","lastTransitionTime":"2026-01-30T08:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.929679 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.947264 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.962063 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.976122 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:35 crc kubenswrapper[4758]: I0130 08:30:35.995878 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc
21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:35Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.010441 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.022180 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.022213 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.022224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.022239 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.022249 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.031208 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec
8f53888d62eaa532bc34f3c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:29Z\\\",\\\"message\\\":\\\"olumns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08:30:29.351524 6111 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352789 6111 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.353501 6111 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0130 08:30:29.353511 6111 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 08:30:29.353517 6111 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352752 6111 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}\\\\nI0130 08:30:29.353541 6111 services_controller.go:360] Finished syncing service machine-api-operator-webhook on namespace openshift-machine-api for network=default : 4.891663ms\\\\nF0130 08:30:29.352795 6111 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:36Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.124577 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.124623 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.124633 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.124655 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.124668 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.227701 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.227768 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.227795 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.227830 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.227857 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.331467 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.331536 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.331563 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.331598 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.331622 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.436160 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.436259 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.436279 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.436351 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.436381 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
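The five-record block that repeats every ~100 ms from here on is the node-status loop: four recorded node events (memory, disk pressure, PID, readiness) followed by setters.go writing a Ready=False condition; only the heartbeat and transition timestamps advance between iterations. A self-contained sketch of the condition object being serialized, with a plain struct standing in for the Kubernetes API type:

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// NodeCondition mirrors the fields visible in the setters.go records.
type NodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	ts, _ := time.Parse(time.RFC3339, "2026-01-30T08:30:36Z")
	cond := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  ts,
		LastTransitionTime: ts,
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
	}
	out, _ := json.Marshal(cond)
	fmt.Println(string(out))
}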
Has your network provider started?"} Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.540519 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.540582 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.540601 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.540627 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.540646 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.645007 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.645228 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.645312 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.645347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.645411 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.715117 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 23:59:05.725606096 +0000 UTC Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.749543 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.749615 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.749629 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.749653 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.749670 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.767958 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.768011 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.768112 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:36 crc kubenswrapper[4758]: E0130 08:30:36.768192 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:36 crc kubenswrapper[4758]: E0130 08:30:36.768721 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:36 crc kubenswrapper[4758]: E0130 08:30:36.768795 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
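Each "Error syncing pod, skipping" above traces back to a single condition: the kubelet finds no CNI network configuration under /etc/kubernetes/cni/net.d/, reports NetworkReady=false, and refuses to create sandboxes for the network-diagnostics and console-plugin pods until a config file appears. A simplified sketch of that directory probe (the real check in the kubelet's CNI code also validates file contents, not just extensions):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// networkReady reports whether at least one CNI config file exists
// in the given directory.
func networkReady(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := networkReady("/etc/kubernetes/cni/net.d")
	if err != nil || !ready {
		fmt.Println("container runtime network not ready: NetworkReady=false")
		return
	}
	fmt.Println("NetworkReady=true")
}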
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.854605 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.855224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.855250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.855283 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.855306 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.960333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.960421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.960478 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.960514 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:36 crc kubenswrapper[4758]: I0130 08:30:36.960575 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:36Z","lastTransitionTime":"2026-01-30T08:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.062855 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.062906 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.062925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.062949 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.062970 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:37Z","lastTransitionTime":"2026-01-30T08:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.166406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.166600 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.166904 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.166933 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.166950 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:37Z","lastTransitionTime":"2026-01-30T08:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.271018 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.271121 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.271140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.271169 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.271191 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:37Z","lastTransitionTime":"2026-01-30T08:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.374313 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.374373 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.374392 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.374419 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.374435 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:37Z","lastTransitionTime":"2026-01-30T08:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.483204 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.483252 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.483265 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.483282 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.483310 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:37Z","lastTransitionTime":"2026-01-30T08:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.586492 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.586566 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.586584 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.586612 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.586631 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:37Z","lastTransitionTime":"2026-01-30T08:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.690004 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.690099 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.690116 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.690134 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.690145 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:37Z","lastTransitionTime":"2026-01-30T08:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.716135 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 23:25:25.33079176 +0000 UTC Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.768678 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:37 crc kubenswrapper[4758]: E0130 08:30:37.769017 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.793783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.793847 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.793864 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.793892 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.793912 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:37Z","lastTransitionTime":"2026-01-30T08:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
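The certificate_manager.go lines are easy to misread: the kubelet-serving certificate's expiration stays fixed at 2026-02-24 05:53:03, yet the reported "rotation deadline" changes on every occurrence (2025-11-28, 2025-12-26, 2025-11-30, ...). That is expected, since client-go's certificate manager draws a fresh, jittered deadline within the certificate's validity window each time it evaluates rotation. A rough sketch; the 70-90% jitter band and the issuance date below are assumptions for illustration, not client-go's exact constants:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline picks a random point at roughly 70-90% of the
// certificate's lifetime (assumed band; the exact fraction used by
// client-go may differ).
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
	notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumed issuance date
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
	}
}

Each call prints a different deadline, which is exactly the pattern in these log lines.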
Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.897315 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.897487 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.897508 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.897542 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.897563 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:37Z","lastTransitionTime":"2026-01-30T08:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:37 crc kubenswrapper[4758]: I0130 08:30:37.945070 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:37 crc kubenswrapper[4758]: E0130 08:30:37.945385 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:37 crc kubenswrapper[4758]: E0130 08:30:37.945556 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs podName:83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4 nodeName:}" failed. No retries permitted until 2026-01-30 08:30:45.945517627 +0000 UTC m=+50.917829208 (durationBeforeRetry 8s). 
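The nestedpendingoperations record above puts the failed metrics-certs mount on an exponential backoff schedule: "No retries permitted until ... (durationBeforeRetry 8s)". The mount fails because the metrics-daemon-secret object is not yet registered in the kubelet's object cache, and rather than retrying hot, each subsequent failure roughly doubles the wait. A minimal sketch of such a schedule, assuming a 500 ms initial delay, a doubling factor, and a two-minute cap (the kubelet's concrete constants may differ):

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond
	maxDelay := 2 * time.Minute
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
		delay *= 2 // exponential backoff
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

By attempt 5 the delay reaches 8s, matching the durationBeforeRetry seen here.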
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs") pod "network-metrics-daemon-gj6b4" (UID: "83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.000838 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.000911 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.000932 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.000956 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.000972 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.104817 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.104891 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.104910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.104933 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.104950 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.209473 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.209584 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.209611 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.209648 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.209671 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.313546 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.313617 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.313629 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.313652 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.313669 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.417112 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.417586 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.417675 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.417765 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.417843 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.522724 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.522783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.522801 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.522829 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.522849 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.626524 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.626561 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.626570 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.626586 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.626599 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.716962 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 18:11:35.583947473 +0000 UTC Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.729173 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.729233 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.729242 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.729290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.729303 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.768607 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:38 crc kubenswrapper[4758]: E0130 08:30:38.768758 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.769013 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:38 crc kubenswrapper[4758]: E0130 08:30:38.770404 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.770548 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:38 crc kubenswrapper[4758]: E0130 08:30:38.770922 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.833747 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.833797 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.833809 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.833832 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.833848 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.937862 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.937930 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.937943 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.937976 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:38 crc kubenswrapper[4758]: I0130 08:30:38.937990 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:38Z","lastTransitionTime":"2026-01-30T08:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.041306 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.041351 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.041362 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.041377 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.041385 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.144140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.144177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.144187 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.144206 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.144218 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.247592 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.247630 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.247640 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.247697 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.247710 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.351330 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.351384 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.351399 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.351436 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.351455 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.454442 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.454478 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.454487 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.454500 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.454512 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.556997 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.557029 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.557049 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.557062 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.557072 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.660128 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.660178 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.660186 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.660201 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.660211 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.718110 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:41:59.1303438 +0000 UTC Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.762358 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.762414 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.762423 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.762437 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.762446 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.767577 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:39 crc kubenswrapper[4758]: E0130 08:30:39.767868 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.866433 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.866488 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.866502 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.866523 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.866536 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.969863 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.969940 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.969960 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.969991 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:39 crc kubenswrapper[4758]: I0130 08:30:39.970012 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:39Z","lastTransitionTime":"2026-01-30T08:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.073767 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.073825 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.073837 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.073856 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.073872 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:40Z","lastTransitionTime":"2026-01-30T08:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.176951 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.176998 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.177012 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.177027 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.177070 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:40Z","lastTransitionTime":"2026-01-30T08:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.280383 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.280456 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.280474 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.280501 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.280522 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:40Z","lastTransitionTime":"2026-01-30T08:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.384987 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.385105 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.385122 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.385151 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.385171 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:40Z","lastTransitionTime":"2026-01-30T08:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.488988 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.489127 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.489152 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.489192 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.489219 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:40Z","lastTransitionTime":"2026-01-30T08:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.593354 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.593425 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.593441 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.593469 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.593485 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:40Z","lastTransitionTime":"2026-01-30T08:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.697415 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.697515 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.697542 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.697576 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.697601 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:40Z","lastTransitionTime":"2026-01-30T08:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.718988 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 17:30:28.823094701 +0000 UTC
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.768431 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.768435 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:30:40 crc kubenswrapper[4758]: E0130 08:30:40.768633 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 08:30:40 crc kubenswrapper[4758]: E0130 08:30:40.768725 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.768455 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 08:30:40 crc kubenswrapper[4758]: E0130 08:30:40.768847 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.801605 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.801661 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.801678 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.801707 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.801732 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:40Z","lastTransitionTime":"2026-01-30T08:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.905429 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.905542 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.905568 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.905605 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:40 crc kubenswrapper[4758]: I0130 08:30:40.905630 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:40Z","lastTransitionTime":"2026-01-30T08:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.008740 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.008806 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.008824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.008852 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.008872 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.113479 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.113544 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.113566 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.113594 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.113613 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.217356 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.217433 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.217453 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.217484 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.217504 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
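Each heartbeat block above records the same Ready=False condition: the kubelet refuses to report the node Ready because no CNI configuration file exists under /etc/kubernetes/cni/net.d/. The payload printed after condition= is plain JSON in the shape of Kubernetes' v1.NodeCondition, so it can be lifted out of any of these lines and decoded mechanically. A minimal sketch in Go (the kubelet's own language), standard library only; the sample string is copied verbatim from the entries above:

// condcheck is a sketch, not part of the kubelet: it decodes the JSON payload
// that setters.go:603 prints after "condition=" in the log lines above.
package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the fields of Kubernetes' v1.NodeCondition that
// actually appear in the logged payload.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Copied verbatim from the "Node became not ready" entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason)
	// Prints: Ready=False reason=KubeletNotReady
}

Until a CNI plugin writes a configuration file into that directory, every subsequent heartbeat re-logs the identical condition, which is exactly the pattern that continues below.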
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.293232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.293320 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.293348 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.293385 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.293411 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:41 crc kubenswrapper[4758]: E0130 08:30:41.319178 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:41Z is after 2025-08-24T17:21:41Z"
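The patch above never reaches the Node object: the API server must first consult the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, and that endpoint's serving certificate expired on 2025-08-24T17:21:41Z, more than five months before these entries. A diagnostic sketch that dials the same endpoint and prints the certificate's validity window (an assumption, not part of the log: it is run on the node itself while the webhook is listening):

// certpeek is a sketch for confirming the x509 failure reported above.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// InsecureSkipVerify is deliberate: verification is exactly what fails in
	// the log, and the goal here is to inspect the certificate, not trust it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
		cert.Subject, cert.NotBefore.UTC(), cert.NotAfter.UTC())
	// Per the error above, notAfter should print as 2025-08-24 17:21:41 UTC.
}

Because the kubelet retries the status patch in a tight loop, the same x509 failure recurs until the webhook's certificate is reissued, as the repeats below show.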
event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.326421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.326447 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.326461 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: E0130 08:30:41.341905 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:41Z is after 2025-08-24T17:21:41Z"
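The retry above fails identically and will keep failing until the webhook certificate is rotated. The kubelet's own serving certificate, by contrast, is healthy: the certificate_manager.go entry further up reports expiration on 2026-02-24 with a rotation deadline of 2025-12-21, i.e. rotation is scheduled comfortably before expiry. client-go's certificate manager picks that deadline at a jittered point around 70-90% of the certificate's lifetime; a sketch of that computation (the 70-90% policy is hedged from client-go's documented behaviour, and the issue date below is an assumption, since only the expiry appears in the log):

// rotatesketch illustrates why a rotation deadline lands months before expiry.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in roughly the 70-90% band of the
// certificate's validity window, mirroring (not reproducing) client-go's policy.
func rotationDeadline(notBefore, notAfter time.Time, r *rand.Rand) time.Time {
	total := notAfter.Sub(notBefore)
	jitter := 0.7 + 0.2*r.Float64() // somewhere in [0.7, 0.9)
	return notBefore.Add(time.Duration(float64(total) * jitter))
}

func main() {
	// Expiry is taken from the certificate_manager.go line in this log;
	// the issue date is assumed (one year of lifetime) for illustration.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-365 * 24 * time.Hour)
	r := rand.New(rand.NewSource(1))
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter, r).UTC())
}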
event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.348035 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.348093 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.348114 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: E0130 08:30:41.370077 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:41Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.375364 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.375423 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.375444 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.375475 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.375494 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: E0130 08:30:41.395677 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:41Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.406875 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.406932 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.406965 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.406995 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.407014 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: E0130 08:30:41.432021 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:41Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:41 crc kubenswrapper[4758]: E0130 08:30:41.432405 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.435861 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.435948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.435967 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.436023 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.436098 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.539971 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.540056 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.540069 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.540095 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.540109 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.643796 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.643874 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.643891 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.643915 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.643963 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.719287 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 07:08:09.292688719 +0000 UTC Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.748291 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.748396 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.748417 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.748449 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.748471 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.768568 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:41 crc kubenswrapper[4758]: E0130 08:30:41.768743 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.852113 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.852153 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.852163 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.852181 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.852193 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.955463 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.955502 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.955513 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.955536 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:41 crc kubenswrapper[4758]: I0130 08:30:41.955548 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:41Z","lastTransitionTime":"2026-01-30T08:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.058807 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.059133 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.059238 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.059302 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.059368 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.087146 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.100149 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.111263 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92e
daf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.133966 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.162610 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.162701 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.162732 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.162769 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.162797 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.163190 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:29Z\\\",\\\"message\\\":\\\"olumns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08:30:29.351524 6111 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352789 6111 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.353501 6111 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0130 08:30:29.353511 6111 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 08:30:29.353517 6111 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352752 6111 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}\\\\nI0130 08:30:29.353541 6111 services_controller.go:360] Finished syncing service machine-api-operator-webhook on namespace openshift-machine-api for network=default : 4.891663ms\\\\nF0130 08:30:29.352795 6111 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.178242 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.197120 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.221296 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42
f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.234071 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.246269 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.258729 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.265211 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.265267 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.265280 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.265303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.265318 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.271079 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.283138 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.295823 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.306144 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.320547 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e
6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.336636 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.352618 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.365216 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:42Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.368170 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.368214 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.368232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.368258 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.368278 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.471087 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.471135 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.471145 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.471161 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.471172 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.573214 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.573262 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.573276 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.573304 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
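
The block above repeats every ~100 ms because each node-status sync re-reports the runtime's NetworkReady=false condition: CRI-O finds no network definition in /etc/kubernetes/cni/net.d/ until ovn-kubernetes writes one. The check the message implies is just "is there a loadable CNI config in that directory"; a minimal Python sketch of it, with the directory taken from the log (the real lookup lives in CRI-O/libcni, which also ranks files by name and distinguishes .conf from .conflist):

    import json
    from pathlib import Path

    # Directory named in the kubelet message (CRI-O's cni-config-dir on this host).
    CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")

    def cni_configured(conf_dir: Path = CNI_CONF_DIR) -> bool:
        """True if at least one parseable CNI network config is present.
        Illustrative only: libcni also honors file ordering and plugin chains."""
        for path in sorted(conf_dir.glob("*.conf*")):
            try:
                cfg = json.loads(path.read_text())
            except (OSError, json.JSONDecodeError):
                continue  # unreadable or malformed candidate; keep scanning
            if isinstance(cfg, dict) and cfg.get("name") and (cfg.get("type") or cfg.get("plugins")):
                return True
        return False

    print("NetworkReady=true" if cni_configured() else "NetworkReady=false")
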
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.573317 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.721113 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 13:23:31.878844656 +0000 UTC
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.768505 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.768534 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 08:30:42 crc kubenswrapper[4758]: E0130 08:30:42.768650 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.768677 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:30:42 crc kubenswrapper[4758]: E0130 08:30:42.768776 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 08:30:42 crc kubenswrapper[4758]: E0130 08:30:42.768858 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.777664 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.777939 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.778025 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.778121 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
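
The certificate_manager.go:356 line above is the kubelet-serving certificate rotation loop. The logged deadline changes on every pass (2025-12-04 here, 2025-12-01 and 2025-12-12 on later passes) because client-go re-draws it at a jittered point of the certificate's validity window, roughly 70-90% of the lifetime; since all of those deadlines are already in the past at the log's clock (2026-01-30), rotation is overdue and is re-attempted on each sync. A sketch of the deadline draw; the 0.7 + 0.2*rand factor is an assumption carried over from upstream client-go, and the one-year issue time is inferred, not logged:

    import random
    from datetime import datetime, timedelta, timezone

    def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
        """Random deadline in ~[70%, 90%] of the validity window (assumed
        to mirror client-go's certificate_manager jitter)."""
        total = (not_after - not_before).total_seconds()
        return not_before + timedelta(seconds=total * (0.7 + 0.2 * random.random()))

    # Expiration from the log line above; issue time assumed one year earlier.
    not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)
    not_before = not_after - timedelta(days=365)
    print(rotation_deadline(not_before, not_after))  # lands in Nov/Dec 2025, like the log
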
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.778197 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.880922 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.880954 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.880963 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.880976 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.880984 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.983790 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.983834 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.983844 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.983861 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:42 crc kubenswrapper[4758]: I0130 08:30:42.983870 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:42Z","lastTransitionTime":"2026-01-30T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.086170 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.086204 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.086214 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.086227 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
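
Every "Failed to update status for pod" entry in this capture dies the same way: when the kubelet PATCHes pod status, the API server consults the pod.network-node-identity.openshift.io admission webhook on https://127.0.0.1:9743, and the TLS handshake fails because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the verifier's clock reads 2026-01-30. The x509 check is a plain validity-window comparison, reproduced here with the two timestamps from the log (stdlib only):

    from datetime import datetime, timezone

    def parse(ts: str) -> datetime:
        return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

    now       = parse("2026-01-30T08:30:42Z")  # verifier's clock, from the log
    not_after = parse("2025-08-24T17:21:41Z")  # webhook cert notAfter, from the log

    # Validity requires notBefore <= now <= notAfter; only the upper bound fails here.
    if now > not_after:
        print(f"x509: certificate has expired: current time "
              f"{now:%Y-%m-%dT%H:%M:%SZ} is after {not_after:%Y-%m-%dT%H:%M:%SZ}")
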
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.086237 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:43Z","lastTransitionTime":"2026-01-30T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.188652 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.188695 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.188707 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.188724 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.188737 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:43Z","lastTransitionTime":"2026-01-30T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.291543 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.291599 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.291615 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.291637 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.291655 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:43Z","lastTransitionTime":"2026-01-30T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.394485 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.394520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.394530 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.394544 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.394556 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:43Z","lastTransitionTime":"2026-01-30T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.496945 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.496973 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.496985 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.497000 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.497012 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:43Z","lastTransitionTime":"2026-01-30T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.599812 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.599862 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.599880 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.599940 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.599958 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:43Z","lastTransitionTime":"2026-01-30T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.702639 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.702682 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.702696 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.702714 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
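
The err="failed to patch status ..." bodies quoted throughout are strategic-merge patches against the pod's status: the $setElementOrder/conditions directive pins the ordering of the conditions list, and the conditions array carries only the entries being changed, merged into the server's copy by their type key instead of replacing the whole list. Stripped of escaping, the shape is roughly the following (values abridged from the iptables-alerter patch earlier in this capture):

    import json

    # Shape of one of the status patches above, de-escaped and abridged.
    patch = {
        "metadata": {"uid": "d75a4c96-2883-4a0b-bab2-0fab2b6c0b49"},
        "status": {
            # Directive: desired order of the merged-by-key "conditions" list.
            "$setElementOrder/conditions": [
                {"type": "PodReadyToStartContainers"},
                {"type": "Initialized"},
                {"type": "Ready"},
                {"type": "ContainersReady"},
                {"type": "PodScheduled"},
            ],
            # Only the changed entries; merged by the "type" key server-side.
            "conditions": [
                {"lastTransitionTime": "2026-01-30T08:30:17Z", "type": "PodReadyToStartContainers"},
                {"lastTransitionTime": "2026-01-30T08:30:17Z", "status": "True", "type": "Ready"},
            ],
        },
    }

    # Wire form of the PATCH body; admission then calls the failing webhook.
    print(json.dumps(patch)[:120])
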
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.702729 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:43Z","lastTransitionTime":"2026-01-30T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.721757 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:12:07.686911784 +0000 UTC
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.767845 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4"
Jan 30 08:30:43 crc kubenswrapper[4758]: E0130 08:30:43.767980 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.769213 4758 scope.go:117] "RemoveContainer" containerID="beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.806073 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.806118 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.806130 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.806151 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.806163 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:43Z","lastTransitionTime":"2026-01-30T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.908974 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.909379 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.909618 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.909878 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
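
Each record in this capture is a journald prefix (Jan 30 08:30:43 crc kubenswrapper[4758]:) wrapping a klog header: severity letter (I/E), MMDD, wall-clock time, PID, source file:line, then a structured message. When mining a log like this one, a small parser saves grep gymnastics; a sketch assuming exactly this two-layer format (the group names are mine, not anything standard):

    import re

    # journald prefix + klog header, as seen on every record of this capture.
    LINE_RE = re.compile(
        r"(?P<month>\w{3}) (?P<day>\d+) (?P<time>[\d:]+) (?P<host>\S+) "
        r"kubenswrapper\[(?P<unit_pid>\d+)\]: "
        r"(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<klog_time>[\d:.]+)\s+(?P<pid>\d+) "
        r"(?P<src>[\w./-]+:\d+)\] (?P<msg>.*)"
    )

    sample = ('Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.767845 '
              '4758 util.go:30] "No sandbox for pod can be found. Need to start '
              'a new one" pod="openshift-multus/network-metrics-daemon-gj6b4"')

    m = LINE_RE.match(sample)
    if m:
        print(m.group("sev"), m.group("src"), "->", m.group("msg")[:45])
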
Jan 30 08:30:43 crc kubenswrapper[4758]: I0130 08:30:43.910098 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:43Z","lastTransitionTime":"2026-01-30T08:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.012422 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.012480 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.012507 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.012527 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.012543 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.116214 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.116248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.116259 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.116303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.116319 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.218702 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.218731 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.218740 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.218751 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.218794 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.321497 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.321558 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.321574 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.321595 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.321612 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.423880 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.423917 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.423925 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.423940 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.423982 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.526309 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.526344 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.526352 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.526364 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.526372 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.628143 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.628178 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.628186 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.628201 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.628210 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.723169 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 17:19:16.856169978 +0000 UTC Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.734816 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.734849 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.734859 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.734874 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.734884 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.767694 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.767819 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:44 crc kubenswrapper[4758]: E0130 08:30:44.767821 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:44 crc kubenswrapper[4758]: E0130 08:30:44.767899 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.767924 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:44 crc kubenswrapper[4758]: E0130 08:30:44.767961 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.837325 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.837353 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.837363 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.837377 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.837387 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.938844 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.938870 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.938878 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.938890 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:44 crc kubenswrapper[4758]: I0130 08:30:44.938898 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:44Z","lastTransitionTime":"2026-01-30T08:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.040996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.041032 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.041060 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.041077 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.041089 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.089801 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/2.log"
Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.090295 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/1.log"
Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.092379 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037" exitCode=1
Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.092419 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037"}
Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.092454 4758 scope.go:117] "RemoveContainer" containerID="beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6"
Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.093032 4758 scope.go:117] "RemoveContainer" containerID="bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037"
Jan 30 08:30:45 crc kubenswrapper[4758]: E0130 08:30:45.093266 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008"
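
ovnkube-controller exited with code 1 again (the ContainerDied PLEG event above), so its pod worker parks it in CrashLoopBackOff for 20s. Kubelet restart back-off doubles per failed restart from a 10s base up to a 5-minute cap, resetting once the container stays up for a while; those constants are assumed upstream defaults rather than anything stated in this log, but a 20s delay is consistent with the second failure of the current streak:

    # Kubelet-style crash-loop delay: 10s base, doubling, 300s cap (assumed
    # upstream defaults). restarts=2 gives the 20s seen in the log above.
    def crashloop_delay(restarts: int, base: int = 10, cap: int = 300) -> int:
        return min(base * 2 ** (restarts - 1), cap)

    print([crashloop_delay(n) for n in range(1, 8)])
    # -> [10, 20, 40, 80, 160, 300, 300]
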
Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.106303 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z"
Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.115115 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.124679 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.135172 4758 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.142903 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.142936 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.142946 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.142960 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.142973 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.146715 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.157357 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.169185 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.190948 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"n
ame\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.204078 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.216194 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.227930 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.239772 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.244438 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.244506 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.244519 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.244535 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.244547 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.257596 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6
456b818e3e7d30ed46fcf037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:29Z\\\",\\\"message\\\":\\\"olumns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08:30:29.351524 6111 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352789 6111 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.353501 6111 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0130 08:30:29.353511 6111 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 08:30:29.353517 6111 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352752 6111 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}\\\\nI0130 08:30:29.353541 6111 services_controller.go:360] Finished syncing service machine-api-operator-webhook on namespace openshift-machine-api for network=default : 4.891663ms\\\\nF0130 08:30:29.352795 6111 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.270457 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.283847 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.297743 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 
2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.311292 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.335241 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cf
c1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.346281 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.346320 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc 
kubenswrapper[4758]: I0130 08:30:45.346329 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.346345 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.346354 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.449162 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.449326 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.449340 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.449356 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.449368 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.552574 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.552628 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.552640 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.552656 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.552666 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.655206 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.655242 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.655250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.655264 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.655273 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.724108 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 01:26:04.919712975 +0000 UTC Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.758021 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.758448 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.758536 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.758701 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.758806 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.768399 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:45 crc kubenswrapper[4758]: E0130 08:30:45.768584 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.788152 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.799797 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.813935 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.826204 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.839957 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.854941 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.860230 4758 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.860255 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.860263 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.860277 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.860286 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.872130 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.886500 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"
host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.897641 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 
08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.910795 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.921876 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.945236 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6
456b818e3e7d30ed46fcf037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beade24036518a44a88d431539714e427603d1ec8f53888d62eaa532bc34f3c6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:29Z\\\",\\\"message\\\":\\\"olumns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {b21188fe-5483-4717-afe6-20a41a40b91a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08:30:29.351524 6111 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352789 6111 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.353501 6111 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0130 08:30:29.353511 6111 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 08:30:29.353517 6111 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 08:30:29.352752 6111 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}\\\\nI0130 08:30:29.353541 6111 services_controller.go:360] Finished syncing service machine-api-operator-webhook on namespace openshift-machine-api for network=default : 4.891663ms\\\\nF0130 08:30:29.352795 6111 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.947533 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:45 crc kubenswrapper[4758]: E0130 08:30:45.947691 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:45 crc kubenswrapper[4758]: E0130 08:30:45.947758 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs podName:83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4 nodeName:}" failed. No retries permitted until 2026-01-30 08:31:01.947742061 +0000 UTC m=+66.920053632 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs") pod "network-metrics-daemon-gj6b4" (UID: "83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.958574 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.961639 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.961776 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.961860 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.961941 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.962020 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:45Z","lastTransitionTime":"2026-01-30T08:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.980193 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:45 crc kubenswrapper[4758]: I0130 08:30:45.998502 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:45Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.009138 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:46Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.024185 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:46Z is after 
2025-08-24T17:21:41Z" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.035207 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:46Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.064844 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.064874 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.064882 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.064895 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.064906 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.096444 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/2.log" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.167170 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.167419 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.167526 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.167633 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.167721 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.270613 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.270655 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.270662 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.270681 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.270690 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.373879 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.374460 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.374585 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.374716 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.374878 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.476889 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.476920 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.476930 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.476945 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.476956 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.552080 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.552213 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:31:18.552194544 +0000 UTC m=+83.524506095 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.552643 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.552821 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.552855 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.552960 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.553135 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:31:18.553120773 +0000 UTC m=+83.525432344 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.553236 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:31:18.553222546 +0000 UTC m=+83.525534097 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.579134 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.579174 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.579199 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.579216 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.579228 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.654361 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.654699 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.654567 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.655137 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.655242 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.654932 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.655359 4758 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.655377 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.655441 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:31:18.655420709 +0000 UTC m=+83.627732270 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.655607 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:31:18.655588485 +0000 UTC m=+83.627900046 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.682344 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.682390 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.682403 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.682421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.682435 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.724833 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:37:28.300932022 +0000 UTC Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.767658 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.767783 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.767655 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.767671 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.767931 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:46 crc kubenswrapper[4758]: E0130 08:30:46.767980 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.784739 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.784805 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.784817 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.784835 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.784847 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.887180 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.887219 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.887229 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.887245 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.887255 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.989570 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.989606 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.989617 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.989635 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:46 crc kubenswrapper[4758]: I0130 08:30:46.989647 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:46Z","lastTransitionTime":"2026-01-30T08:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.091611 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.091644 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.091654 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.091668 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.091678 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:47Z","lastTransitionTime":"2026-01-30T08:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.193881 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.193938 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.193953 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.193971 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.193984 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:47Z","lastTransitionTime":"2026-01-30T08:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.296293 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.296339 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.296351 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.296369 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.296380 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:47Z","lastTransitionTime":"2026-01-30T08:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.399300 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.399340 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.399351 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.399368 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.399378 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:47Z","lastTransitionTime":"2026-01-30T08:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.501570 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.501610 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.501623 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.501639 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.501650 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:47Z","lastTransitionTime":"2026-01-30T08:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.604191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.604232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.604245 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.604260 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.604272 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:47Z","lastTransitionTime":"2026-01-30T08:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.706384 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.706412 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.706423 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.706436 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.706446 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:47Z","lastTransitionTime":"2026-01-30T08:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.725765 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 23:27:07.759762232 +0000 UTC Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.768585 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:47 crc kubenswrapper[4758]: E0130 08:30:47.768758 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.809194 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.809356 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.809452 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.809507 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.809532 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:47Z","lastTransitionTime":"2026-01-30T08:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.912637 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.912669 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.912677 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.912690 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:47 crc kubenswrapper[4758]: I0130 08:30:47.912705 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:47Z","lastTransitionTime":"2026-01-30T08:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.015591 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.015649 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.015662 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.015680 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.015692 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.119885 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.119926 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.119936 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.119954 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.119965 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.222806 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.222855 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.222868 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.222887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.222901 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.325917 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.326013 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.326033 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.326100 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.326122 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.429653 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.429688 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.429699 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.429718 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.429730 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.532683 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.532746 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.532774 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.532802 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.532824 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.635800 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.636094 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.636227 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.636368 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.636497 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.726595 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 21:09:06.461208833 +0000 UTC Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.740422 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.740480 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.740507 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.740537 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.740560 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.767910 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.767987 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.768011 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:48 crc kubenswrapper[4758]: E0130 08:30:48.768154 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:48 crc kubenswrapper[4758]: E0130 08:30:48.768297 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:48 crc kubenswrapper[4758]: E0130 08:30:48.768461 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.843620 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.843657 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.843668 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.843683 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.843694 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.948459 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.948556 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.948579 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.948609 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:48 crc kubenswrapper[4758]: I0130 08:30:48.948641 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:48Z","lastTransitionTime":"2026-01-30T08:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.051678 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.051915 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.051987 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.052071 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.052137 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.154664 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.154779 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.154794 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.154817 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.154830 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.214506 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.215313 4758 scope.go:117] "RemoveContainer" containerID="bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037" Jan 30 08:30:49 crc kubenswrapper[4758]: E0130 08:30:49.215458 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.231666 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPa
th\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.246393 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.257631 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.257677 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.257688 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.257705 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.257716 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.266922 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.285991 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.303792 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.320666 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.337408 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.352796 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc
21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.360558 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.360578 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.360587 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.360599 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.360609 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.368400 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.397544 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.420226 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.436002 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a93800
66b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.462898 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.462954 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.462973 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.462998 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.463014 4758 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.465849 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.483009 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.498484 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.520999 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.540630 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.559524 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:49Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.566262 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.566493 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.566706 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.566858 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.566996 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.671335 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.671406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.671426 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.671455 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.671474 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.727107 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 06:03:42.730326562 +0000 UTC Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.767882 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:49 crc kubenswrapper[4758]: E0130 08:30:49.768441 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.773420 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.773482 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.773508 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.773539 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.773564 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.876717 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.876798 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.876811 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.876828 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.876845 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.980278 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.980750 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.980952 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.981203 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:49 crc kubenswrapper[4758]: I0130 08:30:49.981404 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:49Z","lastTransitionTime":"2026-01-30T08:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.083885 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.083962 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.083978 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.083997 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.084064 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:50Z","lastTransitionTime":"2026-01-30T08:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.187425 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.187480 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.187497 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.187520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.187537 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:50Z","lastTransitionTime":"2026-01-30T08:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.290780 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.290844 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.290864 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.290890 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.290908 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:50Z","lastTransitionTime":"2026-01-30T08:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.393991 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.394072 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.394088 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.394109 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.394129 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:50Z","lastTransitionTime":"2026-01-30T08:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.500763 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.500804 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.500814 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.500828 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.500839 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:50Z","lastTransitionTime":"2026-01-30T08:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.603733 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.603780 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.603797 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.603822 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.603838 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:50Z","lastTransitionTime":"2026-01-30T08:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.706728 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.706783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.706795 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.706812 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.706825 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:50Z","lastTransitionTime":"2026-01-30T08:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.728251 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 17:22:12.893004759 +0000 UTC Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.767541 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.767562 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:50 crc kubenswrapper[4758]: E0130 08:30:50.767681 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.767869 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:50 crc kubenswrapper[4758]: E0130 08:30:50.767929 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:50 crc kubenswrapper[4758]: E0130 08:30:50.768122 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.808805 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.808846 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.808859 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.808875 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.808887 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:50Z","lastTransitionTime":"2026-01-30T08:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.911559 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.911589 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.911600 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.911617 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:50 crc kubenswrapper[4758]: I0130 08:30:50.911629 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:50Z","lastTransitionTime":"2026-01-30T08:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.014691 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.014786 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.014816 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.014853 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.014876 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.116899 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.116939 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.116953 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.116970 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.116982 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.219406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.219450 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.219462 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.219480 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.219493 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.322448 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.322501 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.322520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.322549 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.322572 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.425667 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.425735 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.425759 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.425791 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.425812 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.528425 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.528492 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.528517 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.528547 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.528568 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.631792 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.631849 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.631867 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.631890 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.631910 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.693387 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.693444 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.693461 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.693483 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.693500 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: E0130 08:30:51.714828 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:51Z is after 2025-08-24T17:21:41Z"
Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.719109 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.719288 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
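The patch failure above is mechanical: the serving certificate of the node.network-node-identity.openshift.io webhook expired on 2025-08-24, while the node's clock reads 2026-01-30, so every TLS handshake to https://127.0.0.1:9743 is rejected before the patch is even evaluated. A minimal Go sketch of the same validity-window check (hypothetical diagnostic tooling, not kubelet code; the certificate path is an assumption):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path for illustration; any PEM-encoded certificate works.
	pemBytes, err := os.ReadFile("webhook-serving-cert.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now() // in the log: 2026-01-30T08:30:51Z
	switch {
	case now.After(cert.NotAfter):
		// Matches the logged failure: "current time ... is after 2025-08-24T17:21:41Z"
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate is within its validity window")
	}
}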
event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.719419 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.719524 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.719608 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.729017 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 16:37:50.035972504 +0000 UTC Jan 30 08:30:51 crc kubenswrapper[4758]: E0130 08:30:51.762144 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.768407 4758 util.go:30] "No sandbox for pod can be found. 
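Each "Node became not ready" entry embeds the Ready condition as plain JSON after condition=, which makes the transition easy to extract in log-processing tooling. A short Go sketch (hypothetical helper; field names are taken from the entries above, not from the Kubernetes API types):

package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the JSON shape logged by setters.go:603.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied verbatim from the log entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}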
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:51 crc kubenswrapper[4758]: E0130 08:30:51.768621 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.774285 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.774348 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.774369 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.774394 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.774413 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: E0130 08:30:51.805799 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.810458 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.810503 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
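The err string in the failed updates carries a strategic-merge patch against the Node object's status, with $setElementOrder/conditions fixing the ordering of the merged conditions list. A reduced Go sketch of its shape (hypothetical; conditions only, with the allocatable, capacity, images, and nodeInfo sections of the real payload omitted):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Minimal analogue of the patch body logged at 08:30:51.714828.
	patch := map[string]any{
		"status": map[string]any{
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"},
				{"type": "DiskPressure"},
				{"type": "PIDPressure"},
				{"type": "Ready"},
			},
			"conditions": []map[string]string{
				{
					"type":               "Ready",
					"status":             "False",
					"reason":             "KubeletNotReady",
					"lastHeartbeatTime":  "2026-01-30T08:30:51Z",
					"lastTransitionTime": "2026-01-30T08:30:51Z",
				},
			},
		},
	}
	out, err := json.MarshalIndent(patch, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}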
event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.810515 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.810532 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.810542 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: E0130 08:30:51.828350 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.831948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.831977 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.831989 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.832006 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.832018 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: E0130 08:30:51.843976 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:51Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:51 crc kubenswrapper[4758]: E0130 08:30:51.844370 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.845843 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.845881 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.845892 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.845906 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.845916 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.948471 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.948736 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.948850 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.948943 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:51 crc kubenswrapper[4758]: I0130 08:30:51.949021 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:51Z","lastTransitionTime":"2026-01-30T08:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.050899 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.050930 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.050938 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.050951 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.050960 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.153661 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.153726 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.153748 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.153783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.153807 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.256708 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.256771 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.256790 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.256816 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.256833 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.359300 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.359358 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.359374 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.359399 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.359418 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.462204 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.462258 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.462275 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.462297 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.462313 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.565145 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.565304 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.565333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.565360 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.565383 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.667802 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.667833 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.667841 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.667855 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.667863 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.730457 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:50:29.652682684 +0000 UTC Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.768713 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:52 crc kubenswrapper[4758]: E0130 08:30:52.768914 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.769257 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.769297 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:52 crc kubenswrapper[4758]: E0130 08:30:52.769411 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:52 crc kubenswrapper[4758]: E0130 08:30:52.769488 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.770923 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.770969 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.770985 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.771009 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.771026 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.873741 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.873801 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.873818 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.873841 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.873857 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.976763 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.976834 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.976856 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.976902 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:52 crc kubenswrapper[4758]: I0130 08:30:52.976925 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:52Z","lastTransitionTime":"2026-01-30T08:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.079528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.079604 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.079621 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.079648 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.079666 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:53Z","lastTransitionTime":"2026-01-30T08:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.182322 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.182360 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.182373 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.182386 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.182396 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:53Z","lastTransitionTime":"2026-01-30T08:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.284570 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.284617 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.284629 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.284646 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.284656 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:53Z","lastTransitionTime":"2026-01-30T08:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.387165 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.387230 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.387243 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.387258 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.387270 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:53Z","lastTransitionTime":"2026-01-30T08:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.489881 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.489907 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.489915 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.489926 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.489934 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:53Z","lastTransitionTime":"2026-01-30T08:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.592406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.592452 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.592467 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.592487 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.592502 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:53Z","lastTransitionTime":"2026-01-30T08:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.695216 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.695287 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.695304 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.695326 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.695345 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:53Z","lastTransitionTime":"2026-01-30T08:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.731243 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 14:09:00.215245171 +0000 UTC Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.768793 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:53 crc kubenswrapper[4758]: E0130 08:30:53.768993 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.797111 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.797367 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.797545 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.797678 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.797791 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:53Z","lastTransitionTime":"2026-01-30T08:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.900836 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.900890 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.900907 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.900929 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:53 crc kubenswrapper[4758]: I0130 08:30:53.900945 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:53Z","lastTransitionTime":"2026-01-30T08:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.003759 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.004332 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.004547 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.004749 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.004918 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.108372 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.108426 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.108442 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.108466 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.108485 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.210460 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.210860 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.211024 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.211272 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.211468 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.313733 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.313764 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.313774 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.313786 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.313794 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.416737 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.416809 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.416831 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.416859 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.416883 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.519285 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.519339 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.519353 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.519372 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.519387 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.622989 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.623418 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.623634 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.624068 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.624456 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.727026 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.727076 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.727088 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.727102 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.727111 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.731393 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:50:25.840520059 +0000 UTC Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.767904 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.767971 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:54 crc kubenswrapper[4758]: E0130 08:30:54.768017 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:54 crc kubenswrapper[4758]: E0130 08:30:54.768130 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.768647 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:54 crc kubenswrapper[4758]: E0130 08:30:54.769014 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.828990 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.829254 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.829358 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.829462 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.829570 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.932535 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.932579 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.932591 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.932608 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:54 crc kubenswrapper[4758]: I0130 08:30:54.932621 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:54Z","lastTransitionTime":"2026-01-30T08:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.034893 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.034944 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.034955 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.034978 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.034989 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.137212 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.137243 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.137259 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.137306 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.137324 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.239870 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.240205 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.240296 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.240406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.240487 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.343158 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.343478 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.343590 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.343703 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.343792 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.446505 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.446535 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.446544 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.446558 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.446570 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.548852 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.548897 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.548909 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.548927 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.548939 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.651310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.651350 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.651361 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.651378 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.651390 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.731879 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:48:38.707214196 +0000 UTC
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.753140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.753169 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.753177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.753190 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.753199 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.768716 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4"
Jan 30 08:30:55 crc kubenswrapper[4758]: E0130 08:30:55.768903 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.785370 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.797956 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.810027 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.823846 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 
08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.840358 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.856793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.856830 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.856842 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.856900 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.856915 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.856901 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.872022 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.884132 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.897193 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.908770 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.922368 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.935019 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.957773 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6
456b818e3e7d30ed46fcf037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.958850 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.958883 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.958898 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.958914 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.958924 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:55Z","lastTransitionTime":"2026-01-30T08:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.970838 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:55 crc kubenswrapper[4758]: I0130 08:30:55.989229 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://746
00699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:55Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.002268 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.011477 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.026901 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:56Z is after 2025-08-24T17:21:41Z" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.060404 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.060430 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc 
kubenswrapper[4758]: I0130 08:30:56.060437 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.060449 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.060458 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.162680 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.162716 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.162726 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.162740 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.162751 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.265463 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.265496 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.265504 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.265518 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.265527 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.367751 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.367792 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.367800 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.367814 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.367823 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.470278 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.470337 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.470348 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.470360 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.470369 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.572562 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.572846 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.572935 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.573016 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.573124 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.677551 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.677876 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.678025 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.678230 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.678374 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.732991 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:48:43.991617462 +0000 UTC Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.767594 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.767610 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:56 crc kubenswrapper[4758]: E0130 08:30:56.768414 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.767687 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:56 crc kubenswrapper[4758]: E0130 08:30:56.768601 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:56 crc kubenswrapper[4758]: E0130 08:30:56.768268 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.781943 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.782300 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.782540 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.782755 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.782965 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.886721 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.887140 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.887326 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.887478 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.887610 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.990292 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.990328 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.990340 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.990357 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:56 crc kubenswrapper[4758]: I0130 08:30:56.990369 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:56Z","lastTransitionTime":"2026-01-30T08:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.092923 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.092978 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.092996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.093023 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.093075 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:57Z","lastTransitionTime":"2026-01-30T08:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.195472 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.195514 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.195534 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.195552 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.195566 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:57Z","lastTransitionTime":"2026-01-30T08:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.297996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.298067 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.298079 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.298094 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.298105 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:57Z","lastTransitionTime":"2026-01-30T08:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.400136 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.400198 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.400208 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.400225 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.400236 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:57Z","lastTransitionTime":"2026-01-30T08:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.502983 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.503055 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.503070 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.503086 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.503095 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:57Z","lastTransitionTime":"2026-01-30T08:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.606340 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.606839 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.607033 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.607377 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.607626 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:57Z","lastTransitionTime":"2026-01-30T08:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.710610 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.710680 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.710707 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.710738 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.710756 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:57Z","lastTransitionTime":"2026-01-30T08:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.734115 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 02:39:18.688885816 +0000 UTC Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.767732 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:57 crc kubenswrapper[4758]: E0130 08:30:57.768101 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.813458 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.813517 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.813538 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.813562 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.813579 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:57Z","lastTransitionTime":"2026-01-30T08:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.916776 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.916822 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.916834 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.916854 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:57 crc kubenswrapper[4758]: I0130 08:30:57.916867 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:57Z","lastTransitionTime":"2026-01-30T08:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.019528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.019568 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.019579 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.019595 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.019604 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.121186 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.121223 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.121232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.121246 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.121258 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.223415 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.223446 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.223472 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.223485 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.223493 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.326126 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.326169 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.326180 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.326196 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.326211 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.429292 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.429333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.429341 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.429354 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.429365 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.531760 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.532103 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.532198 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.532277 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.532374 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.633948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.633982 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.633991 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.634004 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.634012 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.734846 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:03:02.09998688 +0000 UTC Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.736166 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.736205 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.736216 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.736232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.736244 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.768450 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:30:58 crc kubenswrapper[4758]: E0130 08:30:58.768598 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.768838 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:30:58 crc kubenswrapper[4758]: E0130 08:30:58.768939 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.769146 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:30:58 crc kubenswrapper[4758]: E0130 08:30:58.769245 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.838781 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.838819 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.838830 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.838848 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.838862 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.941883 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.941934 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.941949 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.941970 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:58 crc kubenswrapper[4758]: I0130 08:30:58.941987 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:58Z","lastTransitionTime":"2026-01-30T08:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.052312 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.052347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.052355 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.052368 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.052377 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.154111 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.154375 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.154564 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.155240 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.156000 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.258152 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.258197 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.258208 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.258224 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.258237 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.360004 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.360059 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.360074 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.360093 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.360105 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.462216 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.462621 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.462786 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.462949 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.463192 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.565998 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.566032 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.566069 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.566100 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.566115 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.668280 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.668504 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.668592 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.668685 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.668785 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.736293 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 19:37:16.235032916 +0000 UTC Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.768283 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:30:59 crc kubenswrapper[4758]: E0130 08:30:59.768420 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.771804 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.771885 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.771897 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.771915 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.771933 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.873765 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.873794 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.873806 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.873822 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.873836 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.976458 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.976492 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.976504 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.976519 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:30:59 crc kubenswrapper[4758]: I0130 08:30:59.976530 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:30:59Z","lastTransitionTime":"2026-01-30T08:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.078739 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.078792 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.078809 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.078828 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.078840 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:00Z","lastTransitionTime":"2026-01-30T08:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.181293 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.181331 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.181343 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.181360 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.181370 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:00Z","lastTransitionTime":"2026-01-30T08:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.283828 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.283864 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.283873 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.283886 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.283895 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:00Z","lastTransitionTime":"2026-01-30T08:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.386693 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.386749 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.386766 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.386788 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.386805 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:00Z","lastTransitionTime":"2026-01-30T08:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.489144 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.489421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.489514 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.489606 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.489699 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:00Z","lastTransitionTime":"2026-01-30T08:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.591938 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.592191 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.592284 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.592364 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.592426 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:00Z","lastTransitionTime":"2026-01-30T08:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.694518 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.694565 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.694578 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.694596 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.694609 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:00Z","lastTransitionTime":"2026-01-30T08:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.737338 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 19:23:14.895848828 +0000 UTC Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.768294 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.768358 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:00 crc kubenswrapper[4758]: E0130 08:31:00.768440 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:00 crc kubenswrapper[4758]: E0130 08:31:00.768541 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.768724 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:00 crc kubenswrapper[4758]: E0130 08:31:00.769105 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.769653 4758 scope.go:117] "RemoveContainer" containerID="bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037" Jan 30 08:31:00 crc kubenswrapper[4758]: E0130 08:31:00.770127 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.797279 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.797310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.797320 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.797335 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.797346 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:00Z","lastTransitionTime":"2026-01-30T08:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.899528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.899762 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.899824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.899910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:00 crc kubenswrapper[4758]: I0130 08:31:00.899997 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:00Z","lastTransitionTime":"2026-01-30T08:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.003770 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.003808 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.003819 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.003835 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.003847 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.107266 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.107641 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.107721 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.107800 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.107864 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.210080 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.210133 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.210153 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.210177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.210195 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.312900 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.312953 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.312966 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.312988 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.313004 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.415656 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.415704 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.415715 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.415735 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.415748 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.518008 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.518075 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.518087 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.518109 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.518123 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.621460 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.621516 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.621528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.621548 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.621560 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.724826 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.724877 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.724924 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.724948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.724963 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.737909 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 11:57:22.591915269 +0000 UTC Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.768496 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:01 crc kubenswrapper[4758]: E0130 08:31:01.768660 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.827532 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.827616 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.827629 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.827676 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.827692 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.930215 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.930251 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.930259 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.930273 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.930282 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.998237 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.998648 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.998734 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.998931 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:01 crc kubenswrapper[4758]: I0130 08:31:01.999080 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:01Z","lastTransitionTime":"2026-01-30T08:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.014820 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.014935 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.014989 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs podName:83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4 nodeName:}" failed. No retries permitted until 2026-01-30 08:31:34.014975591 +0000 UTC m=+98.987287142 (durationBeforeRetry 32s). 
Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.015817 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.020466 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.020491 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.020502 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.020517 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.020529 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.036217 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.040714 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.040745 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
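Every status patch in this log fails the same way: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a certificate that expired on 2025-08-24. A diagnostic sketch that connects to that endpoint (address taken from the log) and reports the presented certificate's validity window; InsecureSkipVerify is deliberate, since verification would abort the handshake on an expired certificate:

    // certprobe.go - connect to the webhook endpoint named in the errors
    // above and report the presented certificate's validity window, to
    // confirm the "certificate has expired" failure.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"time"
    )

    func main() {
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		fmt.Println("dial failed:", err)
    		return
    	}
    	defer conn.Close()
    	cert := conn.ConnectionState().PeerCertificates[0]
    	fmt.Println("subject:  ", cert.Subject)
    	fmt.Println("notBefore:", cert.NotBefore)
    	fmt.Println("notAfter: ", cert.NotAfter)
    	if time.Now().After(cert.NotAfter) {
    		fmt.Println("certificate is expired, matching the x509 error in the log")
    	}
    }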
event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.040755 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.040769 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.040782 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.055792 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.060316 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.060351 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
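The payload the kubelet keeps retrying is a strategic merge patch: the $setElementOrder/conditions directive pins the order in which the four node conditions are merged into the existing list. A minimal sketch of that patch shape using plain maps; this is illustration only, the real kubelet derives an equivalent patch with strategic-merge-patch helpers rather than building it by hand:

    // patchshape.go - the shape of the node-status strategic merge patch
    // seen in the errors above, reduced to the Ready condition.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	patch := map[string]any{
    		"status": map[string]any{
    			// Tells the API server how to order the merged list.
    			"$setElementOrder/conditions": []map[string]string{
    				{"type": "MemoryPressure"}, {"type": "DiskPressure"},
    				{"type": "PIDPressure"}, {"type": "Ready"},
    			},
    			// The elements actually being merged (abbreviated).
    			"conditions": []map[string]string{
    				{"type": "Ready", "status": "False", "reason": "KubeletNotReady"},
    			},
    		},
    	}
    	out, _ := json.MarshalIndent(patch, "", "  ")
    	fmt.Println(string(out))
    }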
event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.060359 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.060371 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.060383 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.074058 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.077870 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.077894 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
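The root of the recurring NetworkPluginNotReady message is an empty /etc/kubernetes/cni/net.d/. A small diagnostic sketch that checks whether the network provider has written a CNI config there yet; the path is taken from the log, and the accepted extensions follow the usual CNI conventions:

    // cnicheck.go - list the CNI conf directory the kubelet is complaining
    // about and report whether any network configuration exists yet.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	dir := "/etc/kubernetes/cni/net.d" // path taken from the log
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Println("cannot read CNI conf dir:", err)
    		return
    	}
    	found := 0
    	for _, e := range entries {
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json":
    			fmt.Println("found network config:", e.Name())
    			found++
    		}
    	}
    	if found == 0 {
    		fmt.Println("no CNI config yet; the network provider has not written one")
    	}
    }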
event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.077902 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.077915 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.077925 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.092825 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:02Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.092996 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.094658 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.094708 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.094728 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.094761 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.094778 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.197725 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.197766 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.197776 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.197790 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.197799 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.300751 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.300810 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.300831 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.300930 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.300948 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.403831 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.403875 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.403887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.403907 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.403920 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.506864 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.506932 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.506950 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.506975 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.506998 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.609181 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.609495 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.609615 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.609741 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.609806 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.712774 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.713252 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.713385 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.713519 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.713640 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.739335 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 18:09:29.373276336 +0000 UTC Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.767777 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.767777 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.767883 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.768584 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.768707 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:02 crc kubenswrapper[4758]: E0130 08:31:02.768824 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.816771 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.816823 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.816835 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.816859 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.816875 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.919907 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.920306 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.920392 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.920481 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:02 crc kubenswrapper[4758]: I0130 08:31:02.920593 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:02Z","lastTransitionTime":"2026-01-30T08:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.023661 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.023707 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.023719 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.023747 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.023758 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.126559 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.126588 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.126597 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.126611 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.126619 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.228993 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.229030 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.229059 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.229075 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.229087 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.331155 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.331195 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.331205 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.331220 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.331230 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.432995 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.433024 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.433046 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.433059 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.433068 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.535306 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.535349 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.535361 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.535376 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.535386 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.637542 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.637581 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.637593 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.637612 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.637627 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.739436 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:23:24.441041943 +0000 UTC Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.740778 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.740821 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.740830 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.740846 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.740855 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.768180 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:03 crc kubenswrapper[4758]: E0130 08:31:03.768322 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.843121 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.843162 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.843174 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.843194 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.843205 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.945397 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.945438 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.945448 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.945462 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:03 crc kubenswrapper[4758]: I0130 08:31:03.945472 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:03Z","lastTransitionTime":"2026-01-30T08:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.047941 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.048197 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.048284 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.048367 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.048450 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.150274 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.150303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.150315 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.150329 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.150338 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.154253 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/0.log" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.154303 4758 generic.go:334] "Generic (PLEG): container finished" podID="fac75e9c-fc94-4c83-8613-bce0f4744079" containerID="0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb" exitCode=1 Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.154367 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99ddw" event={"ID":"fac75e9c-fc94-4c83-8613-bce0f4744079","Type":"ContainerDied","Data":"0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.155095 4758 scope.go:117] "RemoveContainer" containerID="0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.178970 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.191719 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.219380 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6
456b818e3e7d30ed46fcf037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.230821 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.244644 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.252688 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.252818 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.252901 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.252982 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.253086 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.261261 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac719363291512
2eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\"
,\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.275951 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.300932 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42
f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.321989 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.334498 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.346755 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.355792 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.355818 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.355829 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.355844 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.355856 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.362088 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.375185 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.386731 4758 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:31:03Z\\\",\\\"message\\\":\\\"2026-01-30T08:30:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc\\\\n2026-01-30T08:30:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc to /host/opt/cni/bin/\\\\n2026-01-30T08:30:18Z [verbose] multus-daemon started\\\\n2026-01-30T08:30:18Z [verbose] Readiness Indicator file check\\\\n2026-01-30T08:31:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.397990 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 
08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.410616 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.424663 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.441679 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:04Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.458581 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.458615 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.458628 4758 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.458649 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.458663 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.561780 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.561812 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.561826 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.561842 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.561853 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.664119 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.664151 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.664161 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.664177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.664188 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.740428 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:48:30.297152265 +0000 UTC Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.766200 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.766234 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.766244 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.766260 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.766271 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.767535 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.767592 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:04 crc kubenswrapper[4758]: E0130 08:31:04.767639 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.767686 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:04 crc kubenswrapper[4758]: E0130 08:31:04.767724 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:04 crc kubenswrapper[4758]: E0130 08:31:04.767865 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.868517 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.868550 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.868559 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.868572 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.868583 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.971075 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.971107 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.971116 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.971127 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:04 crc kubenswrapper[4758]: I0130 08:31:04.971136 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:04Z","lastTransitionTime":"2026-01-30T08:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.073200 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.073234 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.073243 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.073255 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.073264 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.159203 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/0.log" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.159263 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99ddw" event={"ID":"fac75e9c-fc94-4c83-8613-bce0f4744079","Type":"ContainerStarted","Data":"e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.174588 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.175595 4758 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.175624 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.175635 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.175650 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.175661 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.192027 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:31:03Z\\\",\\\"message\\\":\\\"2026-01-30T08:30:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc\\\\n2026-01-30T08:30:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc to /host/opt/cni/bin/\\\\n2026-01-30T08:30:18Z [verbose] multus-daemon started\\\\n2026-01-30T08:30:18Z [verbose] Readiness Indicator file check\\\\n2026-01-30T08:31:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:31:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.208115 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 
08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.225502 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.241006 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.258772 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.269982 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.277425 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.277477 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.277489 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.277505 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.277535 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.281975 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.295426 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.312913 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPa
th\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.322475 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.335738 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.346748 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.366897 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.377597 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.378938 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.378979 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.378991 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.379005 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.379014 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.387204 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.396406 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.406079 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.481296 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.481316 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.481325 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.481337 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.481346 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.583526 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.583560 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.583572 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.583588 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.583600 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.686308 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.686347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.686358 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.686375 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.686385 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.740986 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:13:10.530927876 +0000 UTC Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.768536 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:05 crc kubenswrapper[4758]: E0130 08:31:05.768802 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.781499 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.788783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.788888 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.788952 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.789012 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.789097 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.802512 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.814760 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.824070 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.835970 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 
2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.846478 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.861743 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.873321 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.885105 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.890985 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.891092 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.891295 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.891496 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.891647 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.896274 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.906866 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.916829 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.926203 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.937770 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:31:03Z\\\",\\\"message\\\":\\\"2026-01-30T08:30:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc\\\\n2026-01-30T08:30:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc to /host/opt/cni/bin/\\\\n2026-01-30T08:30:18Z [verbose] multus-daemon started\\\\n2026-01-30T08:30:18Z [verbose] Readiness Indicator file check\\\\n2026-01-30T08:31:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:31:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.953001 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 
08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.967332 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.981682 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:05Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.994247 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.994299 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.994311 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.994327 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:05 crc kubenswrapper[4758]: I0130 08:31:05.994339 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:05Z","lastTransitionTime":"2026-01-30T08:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.004081 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6
456b818e3e7d30ed46fcf037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:06Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.095895 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.095926 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.095938 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.095954 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.095967 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:06Z","lastTransitionTime":"2026-01-30T08:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.197871 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.198437 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.198598 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.198797 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.199029 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:06Z","lastTransitionTime":"2026-01-30T08:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.302203 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.302237 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.302246 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.302260 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.302268 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:06Z","lastTransitionTime":"2026-01-30T08:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.404510 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.404548 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.404560 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.404578 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.404590 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:06Z","lastTransitionTime":"2026-01-30T08:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.507317 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.507590 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.507689 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.507765 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.507836 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:06Z","lastTransitionTime":"2026-01-30T08:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.612185 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.612248 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.612271 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.612299 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.612321 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:06Z","lastTransitionTime":"2026-01-30T08:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.714534 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.714587 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.714606 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.714629 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.714646 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:06Z","lastTransitionTime":"2026-01-30T08:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.741877 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 21:22:15.22525794 +0000 UTC Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.767912 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:06 crc kubenswrapper[4758]: E0130 08:31:06.768059 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.768122 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:06 crc kubenswrapper[4758]: E0130 08:31:06.768179 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.768223 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:06 crc kubenswrapper[4758]: E0130 08:31:06.768273 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.816746 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.816782 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.816793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.816809 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.816821 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:06Z","lastTransitionTime":"2026-01-30T08:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.919352 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.919613 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.919681 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.919744 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:31:06 crc kubenswrapper[4758]: I0130 08:31:06.919819 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:06Z","lastTransitionTime":"2026-01-30T08:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... this five-record node-status cycle (NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / "Node became not ready") repeats at ~100 ms intervals through 08:31:12, identical except for timestamps; the distinct records and the final cycle follow ...]
Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.742196 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 01:30:44.545657582 +0000 UTC
Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.768247 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4"
Jan 30 08:31:07 crc kubenswrapper[4758]: E0130 08:31:07.768384 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4"
Jan 30 08:31:07 crc kubenswrapper[4758]: I0130 08:31:07.779546 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.742675 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:39:57.775086257 +0000 UTC
Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.768549 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.768604 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:31:08 crc kubenswrapper[4758]: E0130 08:31:08.768657 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 08:31:08 crc kubenswrapper[4758]: I0130 08:31:08.768549 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 08:31:08 crc kubenswrapper[4758]: E0130 08:31:08.768737 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 08:31:08 crc kubenswrapper[4758]: E0130 08:31:08.768821 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.743427 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 11:38:08.576440834 +0000 UTC
Jan 30 08:31:09 crc kubenswrapper[4758]: I0130 08:31:09.767748 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4"
Jan 30 08:31:09 crc kubenswrapper[4758]: E0130 08:31:09.767889 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4"
Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.743882 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 07:09:05.033447475 +0000 UTC
Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.768412 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.768432 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:31:10 crc kubenswrapper[4758]: I0130 08:31:10.768512 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 08:31:10 crc kubenswrapper[4758]: E0130 08:31:10.768534 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 08:31:10 crc kubenswrapper[4758]: E0130 08:31:10.768679 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 08:31:10 crc kubenswrapper[4758]: E0130 08:31:10.768827 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.744800 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 03:52:44.619348959 +0000 UTC
Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.768438 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4"
Jan 30 08:31:11 crc kubenswrapper[4758]: E0130 08:31:11.768597 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4"
Has your network provider started?"} Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.966380 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.966477 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.966495 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.966519 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:11 crc kubenswrapper[4758]: I0130 08:31:11.966536 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:11Z","lastTransitionTime":"2026-01-30T08:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.069965 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.070097 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.070122 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.070149 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.070168 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.173276 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.173322 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.173338 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.173355 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.173368 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
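The loop above repeats roughly every 100 ms because each kubelet sync cycle finds no CNI network configuration. As a way to confirm the condition the log keeps reporting, here is a minimal diagnostic sketch in Go (the kubelet's own language); the directory is taken from the log message itself, while the accepted extensions (.conf, .conflist, .json) are an assumption based on common CNI config loaders, not kubelet source:

// cnicheck.go - diagnostic sketch, not kubelet code: looks for CNI network
// configs in the directory named by the "NetworkPluginNotReady" message above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory from the kubelet error message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed extension list for CNI configs
			fmt.Printf("found CNI config: %s\n", filepath.Join(confDir, e.Name()))
			found = true
		}
	}
	if !found {
		// Matches the condition in the log: NetworkReady=false / NetworkPluginNotReady.
		fmt.Println("no CNI configuration file found; network plugin is not ready")
		os.Exit(1)
	}
}

Run on the node, an empty result corresponds to the NetworkPluginNotReady condition; once the network provider (apparently Multus here, given the openshift-multus pod in the log) writes its config, the kubelet should be able to report NetworkReady=true.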
Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.193723 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.193769 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.193788 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.193810 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.193827 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:31:12 crc kubenswrapper[4758]: E0130 08:31:12.215690 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:12Z is after 2025-08-24T17:21:41Z"
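The patch is rejected because the node-identity webhook's serving certificate expired on 2025-08-24T17:21:41Z, long before the clock time in the log. A small sketch, using only the Go standard library, that dials the webhook address from the error (127.0.0.1:9743) and prints the certificate's validity window, reproducing the same x509 comparison the kubelet's client hit:

// certcheck.go - diagnostic sketch (assumed helper, not part of OpenShift):
// fetches the serving certificate of the webhook endpoint named in the error
// above and reports whether it is currently valid.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // webhook address from the failed Post in the log
	conn, err := tls.Dial("tcp", addr, &tls.Config{
		InsecureSkipVerify: true, // we only want to inspect the cert, not trust it
	})
	if err != nil {
		fmt.Printf("dial %s: %v\n", addr, err)
		os.Exit(1)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		// Same condition as the log: current time is after the cert's NotAfter.
		fmt.Println("certificate has expired")
		os.Exit(1)
	}
}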
event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.220828 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.220860 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.220922 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: E0130 08:31:12.241596 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.247341 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.247400 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
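Each setters.go:603 record above embeds the Ready condition as inline JSON, so the repeated "Node became not ready" lines can be decoded mechanically rather than read by eye. A short sketch whose struct fields mirror the keys visible in the payload (the struct itself is illustrative, not a kubelet type):

// condparse.go - sketch that decodes the condition={...} JSON carried by the
// "Node became not ready" log lines above; field names mirror the log payload.
package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition mirrors the keys visible in the log's condition payload.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied from one of the setters.go:603 records above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason)
	fmt.Println(c.Message)
}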
event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.247423 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.247451 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.247472 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: E0130 08:31:12.267755 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.272729 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.272779 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.272796 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.272818 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.272862 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: E0130 08:31:12.292505 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.298361 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.298402 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.298417 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.298432 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.298448 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: E0130 08:31:12.320826 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:12Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:12 crc kubenswrapper[4758]: E0130 08:31:12.321097 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.323208 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.323245 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.323254 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.323270 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.323280 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.426566 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.426602 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.426610 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.426627 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.426636 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.530116 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.530189 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.530267 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.530303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.530326 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.633187 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.633258 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.633280 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.633311 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.633331 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.736116 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.736169 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.736186 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.736207 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.736223 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.745481 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:16:03.038673039 +0000 UTC Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.767602 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.767647 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.767604 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:12 crc kubenswrapper[4758]: E0130 08:31:12.767787 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:12 crc kubenswrapper[4758]: E0130 08:31:12.767887 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:12 crc kubenswrapper[4758]: E0130 08:31:12.768029 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.839394 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.839444 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.839457 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.839476 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.839490 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.942585 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.942926 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.943205 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.943414 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:12 crc kubenswrapper[4758]: I0130 08:31:12.943626 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:12Z","lastTransitionTime":"2026-01-30T08:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.046947 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.046997 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.047011 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.047031 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.047080 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.150177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.150232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.150247 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.150267 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.150283 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.273936 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.273992 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.274009 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.274033 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.274081 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.376555 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.376594 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.376602 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.376615 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.376623 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.479422 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.479483 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.479501 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.479524 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.479541 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.582324 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.582381 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.582397 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.582421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.582438 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.685602 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.685672 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.685693 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.685724 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.685746 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.746438 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 13:53:37.717942551 +0000 UTC Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.767928 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:13 crc kubenswrapper[4758]: E0130 08:31:13.768181 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.788236 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.788288 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.788304 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.788327 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.788343 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.891218 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.891292 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.891315 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.891343 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.891363 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.993123 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.993184 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.993200 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.993226 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:13 crc kubenswrapper[4758]: I0130 08:31:13.993240 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:13Z","lastTransitionTime":"2026-01-30T08:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.095294 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.095333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.095343 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.095359 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.095369 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:14Z","lastTransitionTime":"2026-01-30T08:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.197491 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.197521 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.197530 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.197541 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.197551 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:14Z","lastTransitionTime":"2026-01-30T08:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.299972 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.300014 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.300025 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.300064 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.300079 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:14Z","lastTransitionTime":"2026-01-30T08:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.402993 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.403144 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.403165 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.403194 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.403213 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:14Z","lastTransitionTime":"2026-01-30T08:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.505602 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.505656 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.505682 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.505710 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.505734 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:14Z","lastTransitionTime":"2026-01-30T08:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.609330 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.609659 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.609881 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.610134 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.610404 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:14Z","lastTransitionTime":"2026-01-30T08:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.713848 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.714201 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.714433 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.714771 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.715153 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:14Z","lastTransitionTime":"2026-01-30T08:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.747336 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 01:47:50.137034645 +0000 UTC Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.768295 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.768327 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:14 crc kubenswrapper[4758]: E0130 08:31:14.768501 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.768538 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:14 crc kubenswrapper[4758]: E0130 08:31:14.769130 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:14 crc kubenswrapper[4758]: E0130 08:31:14.769234 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.769880 4758 scope.go:117] "RemoveContainer" containerID="bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.819166 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.819611 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.819621 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.819638 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.819649 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:14Z","lastTransitionTime":"2026-01-30T08:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.922695 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.922773 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.922794 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.922821 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:14 crc kubenswrapper[4758]: I0130 08:31:14.922846 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:14Z","lastTransitionTime":"2026-01-30T08:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.029879 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.029934 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.029951 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.030016 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.030049 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.133279 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.133346 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.133356 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.133392 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.133413 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.192384 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/2.log" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.201499 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.202860 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.225788 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be1
5adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.240293 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.240355 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.240367 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.240387 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.240399 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.251687 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.278162 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 
08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.298748 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.312791 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.330200 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 
2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.342945 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.343004 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.343020 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.343087 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.343107 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.348900 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-synce
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.367813 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8346f8b9-00a5-4192-aac9-4efed4127d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50869d10f4f4f4973ade69dd2e55d54e956644a0bf21aebdca1d742570dff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197709cd8ffdd38e2b61df570a900749d4f50f7ca63571a79bcff1d2287768a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://197709cd8ffdd38e2b61df570a900749d4f50f7ca63571a79bcff1d2287768a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.402382 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42
f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.422870 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.436764 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.446422 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.446501 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.446518 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.446549 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.446568 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.452570 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.470531 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.485720 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.507487 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:31:03Z\\\",\\\"message\\\":\\\"2026-01-30T08:30:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc\\\\n2026-01-30T08:30:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc to /host/opt/cni/bin/\\\\n2026-01-30T08:30:18Z [verbose] multus-daemon started\\\\n2026-01-30T08:30:18Z [verbose] Readiness Indicator file check\\\\n2026-01-30T08:31:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
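
The kube-multus restart recorded just above is a timeout, not a crash: after copying the CNI plugins into /host/opt/cni/bin/, the daemon blocks until the default network's readiness-indicator file (/host/run/multus/cni/net.d/10-ovn-kubernetes.conf) exists, and because ovn-kubernetes never wrote it, the wait expired. A minimal Go sketch of that kind of file poll, assuming the k8s.io/apimachinery module is on the module path (the one-second interval and 45-second timeout are illustrative, not taken from this log):

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	indicator := "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	// Poll until the readiness-indicator file appears; when the deadline
	// passes first, wait.PollImmediate returns the generic
	// "timed out waiting for the condition" error seen in the log above.
	err := wait.PollImmediate(time.Second, 45*time.Second, func() (bool, error) {
		if _, statErr := os.Stat(indicator); statErr == nil {
			return true, nil
		} else if os.IsNotExist(statErr) {
			return false, nil
		} else {
			return false, statErr
		}
	})
	if err != nil {
		fmt.Fprintf(os.Stderr, "default network not ready: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("default network is ready")
}
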
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:31:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.522913 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 
08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.544906 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.550180 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.550250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.550267 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.550296 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.550317 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.561750 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
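
Both network-check pods above report lastState.terminated.exitCode 137 with reason ContainerStatusUnknown: the kubelet could not locate the old containers after the pod sandbox was lost, so it records a synthetic exit that matches death by SIGKILL under the shell convention (128 + signal number). A tiny illustrative decoding of that convention:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	exitCode := 137
	// Shell convention: exit codes above 128 encode death by signal (128 + N),
	// so 137 decodes to signal 9, SIGKILL.
	if exitCode > 128 {
		sig := syscall.Signal(exitCode - 128)
		fmt.Printf("exit code %d => terminated by signal %d (%s)\n",
			exitCode, int(sig), sig)
	}
}
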
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.579009 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.656966 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.657054 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.657070 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.657091 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.657107 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.747677 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 06:49:24.274417424 +0000 UTC Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.760089 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.760132 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.760141 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.760161 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.760172 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.768786 4758 util.go:30] "No sandbox for pod can be found. 
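
Every "Failed to update status for pod" entry in this window shares one root cause: the serving certificate behind the pod.network-node-identity.openshift.io webhook expired at 2025-08-24T17:21:41Z, while the node clock reads 2026-01-30T08:31:15Z, so Go's TLS handshake rejects the chain with "x509: certificate has expired or is not yet valid". The kubelet's own serving certificate, by contrast, is still within its window (expiration 2026-02-24, per the certificate_manager line above). A minimal sketch of the same validity-window check, assuming a PEM-encoded certificate file on disk (the file name is illustrative, not taken from this log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Illustrative path; the real webhook certificate is whatever the
	// network-node-identity pod mounts at /etc/webhook-cert/.
	data, err := os.ReadFile("webhook-tls.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	// The same NotBefore/NotAfter window check that makes the handshake
	// fail with "certificate has expired or is not yet valid".
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("invalid: current time %s is outside [%s, %s]\n",
			now.Format(time.RFC3339),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}
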
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:15 crc kubenswrapper[4758]: E0130 08:31:15.769106 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.785619 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.801309 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d4048685
63acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.819721 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.840634 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.857345 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.864291 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.864329 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.864348 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.864365 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.864378 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.875852 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.894332 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:31:03Z\\\",\\\"message\\\":\\\"2026-01-30T08:30:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc\\\\n2026-01-30T08:30:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc to /host/opt/cni/bin/\\\\n2026-01-30T08:30:18Z [verbose] multus-daemon started\\\\n2026-01-30T08:30:18Z [verbose] Readiness Indicator file check\\\\n2026-01-30T08:31:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:31:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.911478 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.930522 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.950456 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 
08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.964828 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.967709 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.967745 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.967758 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.967779 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.967792 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:15Z","lastTransitionTime":"2026-01-30T08:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:15 crc kubenswrapper[4758]: I0130 08:31:15.979601 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8346f8b9-00a5-4192-aac9-4efed4127d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50869d10f4f4f4973ade69dd2e55d54e956644a0bf21aebdca1d742570dff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197709cd8ffdd38e2b61df570a900749d4f50f7ca63571a79bcff1d2287768a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://197709cd8ffdd38e2b61df570a900749d4f50f7ca63571a79bcff1d2287768a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:15Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.007682 4758 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:
29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.024286 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.048656 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.070822 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.070863 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.070874 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.070896 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.070910 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:16Z","lastTransitionTime":"2026-01-30T08:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.074120 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac719363291512
2eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\"
,\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.095431 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.110998 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.128706 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:16Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.173480 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.173528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.173537 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.173554 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.173564 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:16Z","lastTransitionTime":"2026-01-30T08:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.276775 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.276845 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.276863 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.276917 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.276936 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:16Z","lastTransitionTime":"2026-01-30T08:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.380589 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.381079 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.381276 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.381427 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.381577 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:16Z","lastTransitionTime":"2026-01-30T08:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.484553 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.484602 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.484612 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.484627 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.484638 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:16Z","lastTransitionTime":"2026-01-30T08:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.587895 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.587941 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.587953 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.587972 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.587983 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:16Z","lastTransitionTime":"2026-01-30T08:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.691953 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.692393 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.692526 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.692708 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.692849 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:16Z","lastTransitionTime":"2026-01-30T08:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.748581 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 23:23:22.135372563 +0000 UTC Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.768101 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.768135 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:16 crc kubenswrapper[4758]: E0130 08:31:16.768725 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:16 crc kubenswrapper[4758]: E0130 08:31:16.768943 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.768306 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:16 crc kubenswrapper[4758]: E0130 08:31:16.769546 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.796014 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.796067 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.796077 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.796090 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.796097 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:16Z","lastTransitionTime":"2026-01-30T08:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.899743 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.899809 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.899824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.899846 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:16 crc kubenswrapper[4758]: I0130 08:31:16.899859 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:16Z","lastTransitionTime":"2026-01-30T08:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.003688 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.003740 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.003754 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.003780 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.003795 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:17Z","lastTransitionTime":"2026-01-30T08:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.107821 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.107914 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.107945 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.107985 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.108010 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:17Z","lastTransitionTime":"2026-01-30T08:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.210928 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.211129 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.211160 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.211198 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.211226 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:17Z","lastTransitionTime":"2026-01-30T08:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.212994 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/3.log" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.213767 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/2.log" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.217435 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540" exitCode=1 Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.217508 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540"} Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.217577 4758 scope.go:117] "RemoveContainer" containerID="bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.219074 4758 scope.go:117] "RemoveContainer" containerID="8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540" Jan 30 08:31:17 crc kubenswrapper[4758]: E0130 08:31:17.219395 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.241486 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.258132 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8346f8b9-00a5-4192-aac9-4efed4127d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50869d10f4f4f4973ade69dd2e55d54e956644a0bf21aebdca1d742570dff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197709cd8ffdd38e2b61df570a900749d4f50f7ca63571a79bcff1d2287768a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://197709cd8ffdd38e2b61df570a900749d4f50f7ca63571a79bcff1d2287768a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.282564 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.298502 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.310851 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.314622 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.314688 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.314705 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.314728 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.314743 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:17Z","lastTransitionTime":"2026-01-30T08:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.328758 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d
825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.351674 4758 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.366524 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.380894 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zv8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gj6b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.407975 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d88a0eb-98f3-4e2b-b076-4454822dbea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 08:29:59.356295 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 08:29:59.357844 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4008458581/tls.crt::/tmp/serving-cert-4008458581/tls.key\\\\\\\"\\\\nI0130 08:30:14.598424 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 08:30:14.604308 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 08:30:14.604348 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 08:30:14.604412 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 08:30:14.604428 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 08:30:14.617154 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0130 08:30:14.617173 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0130 08:30:14.617198 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 08:30:14.617218 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 08:30:14.617225 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 08:30:14.617231 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 08:30:14.617237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0130 08:30:14.620361 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.417624 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.417678 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.417691 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.417710 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.417722 4758 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:17Z","lastTransitionTime":"2026-01-30T08:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.426252 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.445674 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25ba3ea5743b74ab2d7c30de419dac1fd12e8a79f191e6031e9b1800a52daec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d9880a6366d2ec4772fe470a931ac12a1d27d2fe45c638dca627bcecc89de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.465623 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998ccecdf4d19820e681be4e7a6783f9a8b0938356aa34ec4d850cb8561e7697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.483804 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95cfcde3-10c8-4ece-a78a-9508f04a0f09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772c292536580b6837122447a228c0e410f0b52adb16f39ca857395164d86006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwcdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2nkwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.503969 4758 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-99ddw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac75e9c-fc94-4c83-8613-bce0f4744079\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:31:03Z\\\",\\\"message\\\":\\\"2026-01-30T08:30:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc\\\\n2026-01-30T08:30:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7ea6ecc7-ce05-4ae8-9787-6f4f6c3520bc to /host/opt/cni/bin/\\\\n2026-01-30T08:30:18Z [verbose] multus-daemon started\\\\n2026-01-30T08:30:18Z [verbose] Readiness Indicator file check\\\\n2026-01-30T08:31:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:31:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cznc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-99ddw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.520646 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0944aacc-db22-4503-990b-f5724b55d4ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3b34e127f22836eaeb373e263bcf94f2dc77f9b29aacc70deb8ee12d0eb0155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1574fb8779289f89e0f59b42bb378b9e8e8397cd1cc66cb97eaa954cdfdf0239\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96fvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bx4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 
08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.521231 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.521298 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.521311 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.521331 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.521358 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:17Z","lastTransitionTime":"2026-01-30T08:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.541401 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17cbbbe6-bf6d-4617-a47d-bf38311c48bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba18fac1c9d7bd89dea2486c5e5283a4f2b77022beb2e6ac4d494687885928d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be331d9f22a0ba40be15adab3e2bf17c67723bd260819d8a6262cdefdb2b5cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88bb8597ca27118b1eb9bcaf8dc21c3ae6916bcd93c3946a38de03f603beea52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.561565 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299edc55848367c0987e733ea882c089527a78d899b741d543c8eb8b598af443\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.585302 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a682aa56-1a48-46dd-a06c-8cbaaeea7008\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8651815aea3d8021ae5f78afc701de835f4a7092
4c2f0c2c2b10c757561fd540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea348fb4daf75569887acac3834c0f110989fc6456b818e3e7d30ed46fcf037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:30:44Z\\\",\\\"message\\\":\\\" controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:30:44Z is after 2025-08-24T17:21:41Z]\\\\nI0130 08:30:44.853690 6301 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T08:31:16Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 08:31:16.166408 6702 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-d2cb9\\\\nI0130 08:31:16.166421 6702 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-d2cb9\\\\nI0130 08:31:16.166437 6702 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-d2cb9 in node crc\\\\nI0130 08:31:16.166449 6702 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-d2cb9 after 0 failed attempt(s)\\\\nI0130 08:31:16.166455 6702 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-d2cb9\\\\nI0130 08:31:16.166465 6702 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq\\\\nI0130 08:31:16.166470 6702 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq\\\\nI0130 08:31:16.166476 6702 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq in node crc\\\\nI0130 08:31:16.166481 6702 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq after \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T08:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jmdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-d2cb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:17Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.624294 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.624534 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.624689 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.624791 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.624875 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:17Z","lastTransitionTime":"2026-01-30T08:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
[... identical "Recording event message for node" (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) events and the "Node became not ready" condition repeat unchanged at 08:31:17.728 ...]
Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.749303 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 15:56:17.452510746 +0000 UTC
Jan 30 08:31:17 crc kubenswrapper[4758]: I0130 08:31:17.768400 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4"
Jan 30 08:31:17 crc kubenswrapper[4758]: E0130 08:31:17.768606 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4"
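Every one of the status_manager.go:875 failures above has the same root cause, spelled out at the end of each message: the API server cannot call the pod.network-node-identity.openshift.io admission webhook on https://127.0.0.1:9743 because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-30. The failing check is the ordinary x509 validity-window test. A minimal Go sketch of that test, with a hypothetical certificate path:

    // Minimal sketch of the validity-window check behind the
    // "certificate has expired or is not yet valid" errors above.
    // The PEM path is hypothetical; the real serving certificate
    // lives wherever network-node-identity mounts it.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        raw, err := os.ReadFile("/tmp/webhook-serving.crt") // hypothetical path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        now := time.Now()
        switch {
        case now.Before(cert.NotBefore):
            fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
                now.UTC().Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
        case now.After(cert.NotAfter):
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        default:
            fmt.Println("certificate is valid until", cert.NotAfter)
        }
    }

Until that certificate is regenerated (or the clock is moved back inside the validity window), every pod and node status patch will keep bouncing off the webhook.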
[... the same node-status event and "Node became not ready" blocks repeat, unchanged apart from timestamps, at 08:31:17.831, 08:31:17.934, 08:31:18.038, and 08:31:18.142 ...]
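The NotReady condition the kubelet keeps rewriting quotes the runtime: NetworkReady=false because there is no CNI configuration file in /etc/kubernetes/cni/net.d/. The runtime flips NetworkReady to true as soon as a usable CNI config appears in that directory; on this cluster that file would normally be written by ovnkube-controller, which the earlier status patch shows crash-looping on the same expired webhook certificate. A rough Go approximation of the presence check (the accepted extensions follow what libcni loads; treat the details as an assumption, not the runtime's exact code):

    // Rough approximation of the readiness probe behind the repeated
    // NetworkReady=false condition: the runtime considers the network
    // plugin ready once at least one CNI config file exists in its
    // config directory. The directory is the one named in the log.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func cniConfigPresent(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // extensions libcni accepts (assumed)
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
        fmt.Println(ok, err)
    }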
Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.224497 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/3.log"
[... the same node-status event and "Node became not ready" blocks repeat at 08:31:18.246 and 08:31:18.348 ...]
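The log.go:25 line records the kubelet parsing the crashed container's on-disk log, .../ovnkube-controller/3.log, to recover the termination message quoted in the ovnkube-node status patch above; the 3 in 3.log lines up with the restartCount of 3 reported there. Files under /var/log/pods use the CRI logging format, one "timestamp stream tag content" record per line. A small hedged parser for that format:

    // Sketch of the CRI container log-line format the kubelet parses
    // here: "<RFC3339Nano timestamp> <stream> <tag> <content>", where
    // stream is stdout/stderr and tag is F (full line) or P (partial).
    package main

    import (
        "fmt"
        "strings"
        "time"
    )

    type criLogLine struct {
        When    time.Time
        Stream  string // "stdout" or "stderr"
        Partial bool   // tag "P" means the line continues
        Content string
    }

    func parseCRILogLine(line string) (criLogLine, error) {
        parts := strings.SplitN(line, " ", 4)
        if len(parts) != 4 {
            return criLogLine{}, fmt.Errorf("malformed log line: %q", line)
        }
        ts, err := time.Parse(time.RFC3339Nano, parts[0])
        if err != nil {
            return criLogLine{}, err
        }
        return criLogLine{When: ts, Stream: parts[1], Partial: parts[2] == "P", Content: parts[3]}, nil
    }

    func main() {
        l, err := parseCRILogLine("2026-01-30T08:31:16.166408000Z stderr F I0130 08:31:16.166408 obj_retry.go:303] Retry object setup")
        fmt.Println(l, err)
    }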
[... the same node-status event and "Node became not ready" blocks repeat at 08:31:18.452 and 08:31:18.556 ...]
Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.598614 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.598794 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:22.598761506 +0000 UTC m=+147.571073087 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.599402 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.599624 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.599708 4758 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.599992 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:32:22.599972623 +0000 UTC m=+147.572284214 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.599845 4758 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.600436 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 08:32:22.600418707 +0000 UTC m=+147.572730288 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
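Two distinct volume failures show up here. The UnmountVolume.TearDown error means the kubevirt.io.hostpath-provisioner CSI plugin has not (re)registered with the kubelet since the restart, so there is no CSI client to perform the teardown; the nginx-conf and networking-console-plugin-cert failures mean the referenced ConfigMap and Secret are not yet registered in the kubelet's object cache. In every case the operation is parked with durationBeforeRetry 1m4s: failed volume operations are retried with exponential backoff, and 64s is what eight doublings from a 500ms initial delay produce. The constants below are assumptions about the defaults, not a reading of this cluster's binaries:

    // Backoff shape consistent with "durationBeforeRetry 1m4s" above:
    // the delay doubles on each failure from an assumed 500ms initial
    // value up to an assumed cap of 2m2s.
    package main

    import (
        "fmt"
        "time"
    )

    const (
        initialDurationBeforeRetry = 500 * time.Millisecond // assumed default
        maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second // assumed cap
    )

    func nextDelay(current time.Duration) time.Duration {
        next := 2 * current
        if next == 0 {
            next = initialDurationBeforeRetry
        }
        if next > maxDurationBeforeRetry {
            next = maxDurationBeforeRetry
        }
        return next
    }

    func main() {
        d := time.Duration(0)
        for i := 1; i <= 9; i++ {
            d = nextDelay(d)
            fmt.Printf("failure %d: wait %v\n", i, d)
        }
        // failure 8 prints 1m4s, matching the log above.
    }

The retry time of 08:32:22 visible in each message is simply the failure time plus 64s.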
[... the same node-status event and "Node became not ready" blocks repeat at 08:31:18.659 ...]
Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.700877 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.700952 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.701216 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.701247 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.701267 4758 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.701334 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 08:32:22.701313035 +0000 UTC m=+147.673624626 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.701577 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.701667 4758 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.701737 4758 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.701881 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 08:32:22.701859901 +0000 UTC m=+147.674171542 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.750947 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 03:48:51.699635549 +0000 UTC
[... the same node-status event and "Node became not ready" blocks repeat at 08:31:18.762 ...]
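The kube-api-access-* volumes failing here are the projected service-account volumes the kubelet injects into every pod: a bound token, the kube-root-ca.crt ConfigMap, and on OpenShift the openshift-service-ca.crt ConfigMap (plus a downward-API namespace file, omitted below). Both ConfigMaps are reported "not registered", so the projection cannot be assembled at all. For reference, a sketch of such a volume built from k8s.io/api/core/v1 types; the 3607-second token lifetime is the commonly used kubelet default and is assumed here:

    // What a kube-api-access-* volume expands to, approximately. The
    // two "not registered" ConfigMaps above are exactly the sources
    // this projection needs.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func kubeAPIAccessVolume(name string) corev1.Volume {
        expiry := int64(3607) // assumed kubelet-default bound-token lifetime
        return corev1.Volume{
            Name: name, // e.g. "kube-api-access-s2dwl"
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            Path: "token", ExpirationSeconds: &expiry,
                        }},
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
                            Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
                        }},
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
                            Items:                []corev1.KeyToPath{{Key: "service-ca.crt", Path: "service-ca.crt"}},
                        }},
                    },
                },
            },
        }
    }

    func main() {
        fmt.Printf("%+v\n", kubeAPIAccessVolume("kube-api-access-s2dwl"))
    }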
Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.768527 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.768630 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.768677 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.768788 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 08:31:18 crc kubenswrapper[4758]: I0130 08:31:18.768925 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 08:31:18 crc kubenswrapper[4758]: E0130 08:31:18.769158 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
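The pairing of "No sandbox for pod can be found. Need to start a new one" with "Error syncing pod, skipping" is the network-readiness gate: the kubelet wants to create fresh sandboxes for these pods, but it will not do so for a pod on the cluster network while the runtime reports NetworkReady=false, since there would be no CNI plugin to wire the sandbox up. Host-network pods are exempt, which is why the static control-plane pods above kept running. Schematically (an illustration of the decision, not the kubelet's actual code):

    // Schematic version of the gate behind the paired log lines above.
    package main

    import (
        "errors"
        "fmt"
    )

    type pod struct {
        Name        string
        HostNetwork bool
    }

    func canCreateSandbox(p pod, networkReady bool) error {
        if p.HostNetwork {
            return nil // host-network pods don't need the CNI plugin
        }
        if !networkReady {
            return errors.New("network is not ready: container runtime network not ready: NetworkReady=false")
        }
        return nil
    }

    func main() {
        err := canCreateSandbox(pod{Name: "network-check-target-xd92c"}, false)
        fmt.Println(err) // the sync error is recorded and the pod is retried later
    }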
[... the same node-status event and "Node became not ready" blocks repeat at 08:31:18.866, 08:31:18.969, 08:31:19.071, 08:31:19.174, 08:31:19.276, 08:31:19.378, 08:31:19.480, 08:31:19.582, and 08:31:19.685 ...]
Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.752694 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 17:45:00.588650652 +0000 UTC
Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.768263 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4"
Jan 30 08:31:19 crc kubenswrapper[4758]: E0130 08:31:19.768418 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4"
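Note the three certificate_manager.go:356 lines in this stretch: the same expiration, 2026-02-24 05:53:03 UTC, but three different rotation deadlines (2025-12-18, 2025-11-19, 2025-11-20), all already in the past on a node whose clock reads 2026-01-30. That pattern is consistent with a jittered rotation deadline that is recomputed on each pass, picking a random point roughly 70 to 90 percent of the way through the certificate's validity window; once the deadline is behind the clock, rotation is due immediately. Both the fraction range and the one-year validity in the sketch below are assumptions:

    // Jittered rotation deadline consistent with the three different
    // deadlines logged above for the same certificate.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        validity := notAfter.Sub(notBefore)
        frac := 0.7 + 0.2*rand.Float64() // random fraction in [0.7, 0.9), assumed
        return notBefore.Add(time.Duration(frac * float64(validity)))
    }

    func main() {
        notAfter, err := time.Parse("2006-01-02 15:04:05", "2026-02-24 05:53:03")
        if err != nil {
            panic(err)
        }
        notBefore := notAfter.AddDate(-1, 0, 0) // assumed one-year validity
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).Format(time.RFC3339))
        }
    }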
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.787523 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.787563 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.787573 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.787588 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.787598 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:19Z","lastTransitionTime":"2026-01-30T08:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.889336 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.889384 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.889396 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.889412 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.889423 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:19Z","lastTransitionTime":"2026-01-30T08:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.992355 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.992601 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.992701 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.992814 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:19 crc kubenswrapper[4758]: I0130 08:31:19.992891 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:19Z","lastTransitionTime":"2026-01-30T08:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.095860 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.096208 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.096317 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.096421 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.096565 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:20Z","lastTransitionTime":"2026-01-30T08:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.198887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.198923 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.198932 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.198946 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.198955 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:20Z","lastTransitionTime":"2026-01-30T08:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.301417 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.301703 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.301792 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.301887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.302022 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:20Z","lastTransitionTime":"2026-01-30T08:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.404632 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.404671 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.404682 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.404700 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.404711 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:20Z","lastTransitionTime":"2026-01-30T08:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.506292 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.506327 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.506335 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.506350 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.506360 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:20Z","lastTransitionTime":"2026-01-30T08:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.609220 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.609515 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.609527 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.609545 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.609558 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:20Z","lastTransitionTime":"2026-01-30T08:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.711806 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.711840 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.711848 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.711862 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.711871 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:20Z","lastTransitionTime":"2026-01-30T08:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.753100 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 19:13:01.636635978 +0000 UTC Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.768456 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.768536 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.768577 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:20 crc kubenswrapper[4758]: E0130 08:31:20.768632 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:20 crc kubenswrapper[4758]: E0130 08:31:20.768696 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:20 crc kubenswrapper[4758]: E0130 08:31:20.768838 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.813914 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.813941 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.813948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.813960 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.813968 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:20Z","lastTransitionTime":"2026-01-30T08:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.916350 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.916393 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.916408 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.916424 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:20 crc kubenswrapper[4758]: I0130 08:31:20.916433 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:20Z","lastTransitionTime":"2026-01-30T08:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.018992 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.019065 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.019083 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.019100 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.019107 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.120572 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.120601 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.120608 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.120621 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.120629 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.223088 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.223132 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.223144 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.223161 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.223173 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.325735 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.325764 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.325773 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.325785 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.325793 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.427547 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.427587 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.427596 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.427610 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.427619 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.529749 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.529781 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.529790 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.529806 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.529816 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.631698 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.631732 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.631741 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.631757 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.631766 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.733872 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.734011 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.734085 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.734114 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.734131 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.754217 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 07:02:30.765953748 +0000 UTC Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.768657 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:21 crc kubenswrapper[4758]: E0130 08:31:21.769228 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.836568 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.836609 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.836620 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.836634 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.836645 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.938954 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.939009 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.939021 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.939039 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:21 crc kubenswrapper[4758]: I0130 08:31:21.939050 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:21Z","lastTransitionTime":"2026-01-30T08:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.041619 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.041658 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.041675 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.041690 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.041700 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.143525 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.143569 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.143577 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.143595 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.143604 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.245655 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.245696 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.245705 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.245721 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.245732 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.348494 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.348551 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.348566 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.348588 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.348604 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.429300 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.429337 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.429347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.429364 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.429375 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: E0130 08:31:22.442770 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.447199 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.447259 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.447272 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.447310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.447324 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: E0130 08:31:22.459490 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.462982 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.463049 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.463062 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.463133 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.463149 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: E0130 08:31:22.475512 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.478778 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.478809 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
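Every retry above dies on the same TLS handshake: the network-node-identity webhook's serving certificate expired on 2025-08-24, long before the node's clock of 2026-01-30. A minimal Go sketch (not kubelet source) of the validity-window comparison that produces this x509 error; the certificate path is a hypothetical placeholder, not taken from this cluster:

// Sketch: the [NotBefore, NotAfter] check behind
// "x509: certificate has expired or is not yet valid".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("webhook-serving-cert.pem") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		// Same shape as the log line: "current time ... is after 2025-08-24T17:21:41Z"
		fmt.Printf("certificate has expired or is not yet valid: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}

Go's crypto/x509 chain verification applies this comparison to every certificate in the chain, so the kubelet cannot patch its status until the webhook certificate is rotated.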
event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.478820 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.478833 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.478841 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: E0130 08:31:22.491796 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.495305 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.495338 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.495350 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.495365 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.495377 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: E0130 08:31:22.508354 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T08:31:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"17f3e5ed-5a91-4942-8912-7cdc4bc4d7ef\\\",\\\"systemUUID\\\":\\\"4febaf4d-16fb-4d22-878e-0234bcbe9a79\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:22Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:22 crc kubenswrapper[4758]: E0130 08:31:22.508505 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.509858 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
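The 08:31:22.508505 entry marks the end of one status-sync cycle: the kubelet attempts the status PATCH a fixed number of times before giving up until the next sync period (nodeStatusUpdateRetry, 5 in recent upstream kubelet sources). A rough Go sketch of that bounded-retry shape; the names paraphrase kubelet_node_status.go rather than quote it:

// Sketch of the loop behind "Error updating node status, will retry"
// followed by "update node status exceeds retry count".
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed from upstream kubelet's retry budget

// tryUpdateNodeStatus stands in for the real status PATCH; here it always
// fails the way this log does, via the webhook's expired serving cert.
func tryUpdateNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}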
event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.509891 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.509924 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.509939 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.509951 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.611592 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.611633 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.611645 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.611662 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.611674 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.714310 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.714359 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.714375 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.714395 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.714409 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.754996 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:55:03.538813139 +0000 UTC
Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.768407 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.768447 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.768528 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 08:31:22 crc kubenswrapper[4758]: E0130 08:31:22.768648 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 08:31:22 crc kubenswrapper[4758]: E0130 08:31:22.768749 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 08:31:22 crc kubenswrapper[4758]: E0130 08:31:22.768813 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.817171 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.817208 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.817220 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.817234 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.817243 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
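The certificate_manager line shows the kubelet-serving certificate is still valid until 2026-02-24, but its rotation deadline already lies in the past (2025-12-16), so rotation is due immediately; note the deadline changes on the next pass (2025-12-26 at 08:31:23) because client-go redraws it randomly from roughly the 70-90% span of the certificate lifetime on each evaluation. A sketch of that computation, with the 0.7 + 0.2*rand factors assumed from upstream k8s.io/client-go/util/certificate rather than quoted from it:

// Sketch: jittered rotation deadline, redrawn per evaluation.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiry is the value from the log; the issue time is hypothetical
	// (assumed one year before expiry).
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	for i := 0; i < 2; i++ {
		// Two calls give two different deadlines, like the
		// 08:31:22 and 08:31:23 log lines.
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}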
Has your network provider started?"} Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.918582 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.918620 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.918631 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.918646 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:22 crc kubenswrapper[4758]: I0130 08:31:22.918658 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:22Z","lastTransitionTime":"2026-01-30T08:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.020582 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.020613 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.020621 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.020634 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.020643 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.123148 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.123277 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.123290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.123306 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.123317 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.226185 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.226225 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.226235 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.226250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.226261 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.328382 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.328422 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.328434 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.328450 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.328461 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.430759 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.430802 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.430812 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.430828 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.430842 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.532865 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.532912 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.532920 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.532934 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.532943 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.634865 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.634910 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.634921 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.634937 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.634948 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.737461 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.737500 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.737507 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.737520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.737528 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.755269 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 16:07:53.63138276 +0000 UTC Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.767955 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:23 crc kubenswrapper[4758]: E0130 08:31:23.768103 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.839978 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.840014 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.840022 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.840057 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.840070 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.943104 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.943159 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.943168 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.943182 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:23 crc kubenswrapper[4758]: I0130 08:31:23.943192 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:23Z","lastTransitionTime":"2026-01-30T08:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.045798 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.045824 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.045834 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.045847 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.045856 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.147793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.147837 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.147850 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.147867 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.147879 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.250484 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.250525 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.250536 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.250552 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.250564 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.353951 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.354015 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.354076 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.354108 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.354131 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.456512 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.456555 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.456569 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.456589 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.456602 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.559354 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.559399 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.559413 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.559433 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.559447 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.661337 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.661398 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.661430 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.661447 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.661459 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.755633 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:33:04.707023596 +0000 UTC Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.763567 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.763732 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.763820 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.763918 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.764084 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.768069 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.768111 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:24 crc kubenswrapper[4758]: E0130 08:31:24.768164 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.768068 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:24 crc kubenswrapper[4758]: E0130 08:31:24.768277 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:24 crc kubenswrapper[4758]: E0130 08:31:24.768311 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.865827 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.865861 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.865872 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.865887 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.865897 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.967710 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.967754 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.967765 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.967783 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:24 crc kubenswrapper[4758]: I0130 08:31:24.967796 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:24Z","lastTransitionTime":"2026-01-30T08:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.070415 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.070455 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.070471 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.070492 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.070508 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.174030 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.174136 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.174156 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.174179 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.174196 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.276438 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.276511 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.276532 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.276555 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.276571 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.379077 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.379121 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.379133 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.379150 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.379161 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.481114 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.481150 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.481162 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.481177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.481191 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.583643 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.583969 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.584211 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.584478 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.584908 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.687254 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.687321 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.687345 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.687375 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.687396 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.756117 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 05:59:47.619297803 +0000 UTC Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.767601 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:25 crc kubenswrapper[4758]: E0130 08:31:25.768027 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.779961 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30736e17-2a41-408e-a9f3-220a96c60b29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3cd369ce54291f9db5a406232dbf2f434a6a4ee25e54333ecaf15d6029fa7b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be3979deeb3a683d5d343075def8d383b9487b5792243b917b8b7d526720119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fa20b912ba2a95e1073030f547aa20dce2fb067dac0a0f70a2151ce5f1a11e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba5ef676da5ac9f34b63c2e2fc3fa701175e9c583c6ccdbf402295e571742cb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.789706 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.789734 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.789743 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.789756 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.789765 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.791673 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8346f8b9-00a5-4192-aac9-4efed4127d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b50869d10f4f4f4973ade69dd2e55d54e956644a0bf21aebdca1d742570dff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://197709cd8ffdd38e2b61df570a900749d4f50f7ca63571a79bcff1d2287768a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://197709cd8ffdd38e2b61df570a900749d4f50f7ca63571a79bcff1d2287768a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.814402 4758 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b59a1ed9-a43b-40fc-a0dc-7c93b25b2af4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3cb03041a8cd379e10a62b3528fb13aa75848b8ae039b90886b87796e93be7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74600699dd4d0293cfc1b034fffb0d11a7aaadb7981a432c377dcb0e77a1e757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fabce6f1b0cbb34dedec213308431d87c07522f11261295407884cbff432972b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc52e404216c66b8e72b269a4f1dad0c47f2c42f80bd473c7a47d0c5f2227202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37014698b9ecaa63f0e081613a26d5ac4caed8bcba70cbfba23b1988056583b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0d818d9a8250be47cf6f5694af56ab909d470f474f5c99fe9212f09989881a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad286f75bb26f70fb8ff046c146521a31bb481dfb6a2185a3973470433a84095\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:
29:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4c4648e8ea3a26894b9d796dfabe68d02b0013d362e082087b66fde5e421989\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:29:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:29:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:29:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.829635 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.839359 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8z796" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"452effb5-a499-4c47-a71d-12198ffa37c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://658283fa557e7977aedfa9a524e94a6d88793328b7f93e177964749b77d9ae67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w4ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8z796\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.851468 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518ee414-95c2-4ee2-8bef-bd1af1d5afb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1105c17c7b8a2ca276eba6684a8f8c349c328ef550cd65a5e4a9de8b141f872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e34af7c31fe99dac7193632915122eaea8501749fe676a4643d64c4958fd872\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7760011d825ac9fa1832450185d4bbb853fc5fb4448b517a712b789254c5924a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5761e2987880d1175a2136369c4bff3c212901c86c1686eae420ec9d8ad159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://430a0d4d83c5117d6df61607dd01469a481e8cff1b83d0f7b996380f87b40f39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:21Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41e5dbe2cba2e75c4255adebc86d8eb818289d0414302fdb26117233fbf99792\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a48d40f13b9db68adbab76409dde30982a1f3ba9faff74e7836e1c25407a06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T08:30:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T08:30:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m99tv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6t8nj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:25Z is after 
2025-08-24T17:21:41Z" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.862617 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.874285 4758 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lnh2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28b9f864-7294-4168-8200-3dbba23ffc97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T08:30:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df4c68b25412dbf183f11080147930ae20c44328773f3f07134d280a0493859f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T08:30:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb85r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T08:30:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lnh2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T08:31:25Z is after 2025-08-24T17:21:41Z" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.893671 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.893696 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.893705 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.893718 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.893726 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.913405 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.913384529 podStartE2EDuration="1m11.913384529s" podCreationTimestamp="2026-01-30 08:30:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:25.910637816 +0000 UTC m=+90.882949367" watchObservedRunningTime="2026-01-30 08:31:25.913384529 +0000 UTC m=+90.885696080" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.958392 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podStartSLOduration=70.958372002 podStartE2EDuration="1m10.958372002s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:25.957067063 +0000 UTC m=+90.929378614" watchObservedRunningTime="2026-01-30 08:31:25.958372002 +0000 UTC m=+90.930683563" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.982477 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-99ddw" podStartSLOduration=70.982462627 podStartE2EDuration="1m10.982462627s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:25.97074804 +0000 UTC m=+90.943059601" watchObservedRunningTime="2026-01-30 08:31:25.982462627 +0000 UTC m=+90.954774178" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.995758 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.996068 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.996198 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.996303 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:25 crc kubenswrapper[4758]: I0130 08:31:25.996404 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:25Z","lastTransitionTime":"2026-01-30T08:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.003186 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=66.003168419 podStartE2EDuration="1m6.003168419s" podCreationTimestamp="2026-01-30 08:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:26.002666374 +0000 UTC m=+90.974977925" watchObservedRunningTime="2026-01-30 08:31:26.003168419 +0000 UTC m=+90.975479980" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.003740 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bx4hq" podStartSLOduration=71.003732826 podStartE2EDuration="1m11.003732826s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:25.986941454 +0000 UTC m=+90.959253005" watchObservedRunningTime="2026-01-30 08:31:26.003732826 +0000 UTC m=+90.976044377" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.098368 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.098408 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.098418 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.098450 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.098463 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:26Z","lastTransitionTime":"2026-01-30T08:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.200098 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.200333 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.200432 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.200528 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.200611 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:26Z","lastTransitionTime":"2026-01-30T08:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.303159 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.303394 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.303455 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.303519 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.303651 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:26Z","lastTransitionTime":"2026-01-30T08:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.405797 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.405825 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.405836 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.405852 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.405862 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:26Z","lastTransitionTime":"2026-01-30T08:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.507884 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.508169 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.508244 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.508315 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.508399 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:26Z","lastTransitionTime":"2026-01-30T08:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.610119 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.610375 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.610453 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.610534 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.610611 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:26Z","lastTransitionTime":"2026-01-30T08:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.713147 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.713185 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.713196 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.713210 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.713220 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:26Z","lastTransitionTime":"2026-01-30T08:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.757055 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 05:25:22.909632842 +0000 UTC Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.768426 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.768465 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.768465 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:26 crc kubenswrapper[4758]: E0130 08:31:26.768580 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:26 crc kubenswrapper[4758]: E0130 08:31:26.768719 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:26 crc kubenswrapper[4758]: E0130 08:31:26.768806 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.818914 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.818975 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.818987 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.819013 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.819025 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:26Z","lastTransitionTime":"2026-01-30T08:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.921416 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.921460 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.921473 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.921490 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:26 crc kubenswrapper[4758]: I0130 08:31:26.921499 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:26Z","lastTransitionTime":"2026-01-30T08:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.023752 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.023793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.023805 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.023823 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.023834 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.126312 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.126347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.126355 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.126370 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.126381 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.228971 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.229002 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.229011 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.229024 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.229032 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.331082 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.331119 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.331127 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.331139 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.331149 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.433757 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.433818 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.433833 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.433849 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.433862 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.536132 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.536174 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.536183 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.536198 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.536211 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.637834 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.637881 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.637889 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.637903 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.637914 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.740311 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.740349 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.740359 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.740372 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.740381 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.757822 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 11:11:14.305386499 +0000 UTC Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.768319 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:27 crc kubenswrapper[4758]: E0130 08:31:27.768440 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.769740 4758 scope.go:117] "RemoveContainer" containerID="8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540" Jan 30 08:31:27 crc kubenswrapper[4758]: E0130 08:31:27.769988 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.784934 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lnh2g" podStartSLOduration=72.784910919 podStartE2EDuration="1m12.784910919s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:27.784540068 +0000 UTC m=+92.756851629" watchObservedRunningTime="2026-01-30 08:31:27.784910919 +0000 UTC m=+92.757222500" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.842166 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.842220 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.842237 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.842260 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.842277 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.869398 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=70.869375716 podStartE2EDuration="1m10.869375716s" podCreationTimestamp="2026-01-30 08:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:27.868568051 +0000 UTC m=+92.840879612" watchObservedRunningTime="2026-01-30 08:31:27.869375716 +0000 UTC m=+92.841687307" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.870484 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=20.870475669 podStartE2EDuration="20.870475669s" podCreationTimestamp="2026-01-30 08:31:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:27.841473425 +0000 UTC m=+92.813784976" watchObservedRunningTime="2026-01-30 08:31:27.870475669 +0000 UTC m=+92.842787240" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.894362 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-8z796" podStartSLOduration=72.894335758 podStartE2EDuration="1m12.894335758s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:27.89344643 +0000 UTC m=+92.865757981" watchObservedRunningTime="2026-01-30 08:31:27.894335758 +0000 UTC m=+92.866647309" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.914602 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-6t8nj" podStartSLOduration=72.914582265 podStartE2EDuration="1m12.914582265s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:27.913944926 +0000 UTC m=+92.886256517" watchObservedRunningTime="2026-01-30 08:31:27.914582265 +0000 UTC m=+92.886893826" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.930983 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=45.930964765 podStartE2EDuration="45.930964765s" podCreationTimestamp="2026-01-30 08:30:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:27.928429448 +0000 UTC m=+92.900741009" watchObservedRunningTime="2026-01-30 08:31:27.930964765 +0000 UTC m=+92.903276316" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.945032 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.945104 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.945115 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.945136 4758 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Jan 30 08:31:27 crc kubenswrapper[4758]: I0130 08:31:27.945148 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:27Z","lastTransitionTime":"2026-01-30T08:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.048442 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.048507 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.048524 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.048551 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.048573 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.152410 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.152498 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.152524 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.152575 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.152602 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.256812 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.256873 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.256891 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.256920 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.256939 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.360447 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.360512 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.360530 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.360559 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.360579 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.464290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.464369 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.464393 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.464424 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.464447 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.568124 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.568192 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.568210 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.568239 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.568257 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.672686 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.672745 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.672759 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.672782 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.672797 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.758600 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 04:40:15.046418752 +0000 UTC Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.768120 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.768175 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.768122 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:28 crc kubenswrapper[4758]: E0130 08:31:28.768285 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:28 crc kubenswrapper[4758]: E0130 08:31:28.768374 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:28 crc kubenswrapper[4758]: E0130 08:31:28.768498 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.776154 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.776242 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.776263 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.776290 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.776307 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.878468 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.878506 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.878515 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.878530 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.878540 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.981765 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.981834 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.981849 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.981872 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:28 crc kubenswrapper[4758]: I0130 08:31:28.981887 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:28Z","lastTransitionTime":"2026-01-30T08:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.084871 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.084934 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.084949 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.084970 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.084985 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:29Z","lastTransitionTime":"2026-01-30T08:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.188250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.188313 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.188328 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.188347 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.188358 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:29Z","lastTransitionTime":"2026-01-30T08:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.291613 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.291680 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.291699 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.291726 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.291744 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:29Z","lastTransitionTime":"2026-01-30T08:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.395177 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.395232 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.395260 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.395282 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.395295 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:29Z","lastTransitionTime":"2026-01-30T08:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.538379 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.538448 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.538472 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.538504 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.538528 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:29Z","lastTransitionTime":"2026-01-30T08:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.641934 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.641986 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.642009 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.642062 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.642082 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:29Z","lastTransitionTime":"2026-01-30T08:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.745319 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.745384 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.745404 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.745431 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.745451 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:29Z","lastTransitionTime":"2026-01-30T08:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.759739 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 14:20:04.029996206 +0000 UTC Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.769387 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:29 crc kubenswrapper[4758]: E0130 08:31:29.769605 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.848017 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.848099 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.848118 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.848145 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.848165 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:29Z","lastTransitionTime":"2026-01-30T08:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.951948 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.951996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.952013 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.952043 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:29 crc kubenswrapper[4758]: I0130 08:31:29.952085 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:29Z","lastTransitionTime":"2026-01-30T08:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.054666 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.054777 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.054792 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.054817 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.054830 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.157245 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.157280 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.157293 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.157308 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.157318 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.260393 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.260439 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.260451 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.260465 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.260478 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.364169 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.364250 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.364269 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.364300 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.364319 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.467223 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.467269 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.467284 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.467305 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.467320 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.570437 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.570488 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.570504 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.570520 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.570532 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.672591 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.672637 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.672651 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.672671 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.672685 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.760537 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 09:27:02.496845005 +0000 UTC Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.767851 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.767881 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.767975 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:30 crc kubenswrapper[4758]: E0130 08:31:30.768076 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:30 crc kubenswrapper[4758]: E0130 08:31:30.768189 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:30 crc kubenswrapper[4758]: E0130 08:31:30.768248 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.775120 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.775149 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.775158 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.775172 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.775184 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.877627 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.877682 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.877698 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.877721 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.877739 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.980318 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.980357 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.980370 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.980386 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:30 crc kubenswrapper[4758]: I0130 08:31:30.980396 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:30Z","lastTransitionTime":"2026-01-30T08:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.082485 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.082536 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.082553 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.082575 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.082594 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:31Z","lastTransitionTime":"2026-01-30T08:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.185932 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.185969 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.185979 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.185995 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.186005 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:31Z","lastTransitionTime":"2026-01-30T08:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.288848 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.288911 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.288924 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.288944 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.288957 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:31Z","lastTransitionTime":"2026-01-30T08:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.392366 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.392406 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.392423 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.392445 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.392464 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:31Z","lastTransitionTime":"2026-01-30T08:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.495199 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.495233 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.495246 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.495261 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.495271 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:31Z","lastTransitionTime":"2026-01-30T08:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.597957 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.597987 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.597996 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.598011 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.598029 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:31Z","lastTransitionTime":"2026-01-30T08:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.700573 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.700604 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.700613 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.700627 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.700637 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:31Z","lastTransitionTime":"2026-01-30T08:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.761583 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:31:47.557828449 +0000 UTC Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.768250 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:31 crc kubenswrapper[4758]: E0130 08:31:31.768391 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.803749 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.803793 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.803806 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.803822 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.803834 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:31Z","lastTransitionTime":"2026-01-30T08:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.906403 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.906442 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.906453 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.906473 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:31 crc kubenswrapper[4758]: I0130 08:31:31.906486 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:31Z","lastTransitionTime":"2026-01-30T08:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.009014 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.009096 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.009110 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.009132 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.009151 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:32Z","lastTransitionTime":"2026-01-30T08:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.113007 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.113127 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.113152 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.113187 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.113216 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:32Z","lastTransitionTime":"2026-01-30T08:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.217461 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.217555 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.217575 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.217606 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.217628 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:32Z","lastTransitionTime":"2026-01-30T08:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.321698 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.321779 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.321798 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.321829 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.321857 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:32Z","lastTransitionTime":"2026-01-30T08:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.434530 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.434625 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.434647 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.434678 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.434699 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:32Z","lastTransitionTime":"2026-01-30T08:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.531623 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.531687 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.531705 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.531734 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.531755 4758 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T08:31:32Z","lastTransitionTime":"2026-01-30T08:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.607732 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9"] Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.608622 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.612237 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.613433 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.616156 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.617267 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.666781 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/768774e5-5df7-470d-a77b-593638e3a4ef-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.666878 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/768774e5-5df7-470d-a77b-593638e3a4ef-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.666940 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/768774e5-5df7-470d-a77b-593638e3a4ef-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.666973 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/768774e5-5df7-470d-a77b-593638e3a4ef-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.667075 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/768774e5-5df7-470d-a77b-593638e3a4ef-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.762576 4758 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 04:15:34.783933446 +0000 UTC Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.762687 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.767769 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/768774e5-5df7-470d-a77b-593638e3a4ef-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.767769 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.767919 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/768774e5-5df7-470d-a77b-593638e3a4ef-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.767974 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/768774e5-5df7-470d-a77b-593638e3a4ef-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.768090 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/768774e5-5df7-470d-a77b-593638e3a4ef-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.768142 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/768774e5-5df7-470d-a77b-593638e3a4ef-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: E0130 08:31:32.768180 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.768406 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/768774e5-5df7-470d-a77b-593638e3a4ef-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.767789 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.768539 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/768774e5-5df7-470d-a77b-593638e3a4ef-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: E0130 08:31:32.768638 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.769685 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:32 crc kubenswrapper[4758]: E0130 08:31:32.769882 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.770120 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/768774e5-5df7-470d-a77b-593638e3a4ef-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.777553 4758 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.789339 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/768774e5-5df7-470d-a77b-593638e3a4ef-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.800990 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/768774e5-5df7-470d-a77b-593638e3a4ef-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rljx9\" (UID: \"768774e5-5df7-470d-a77b-593638e3a4ef\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:32 crc kubenswrapper[4758]: I0130 08:31:32.930836 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" Jan 30 08:31:33 crc kubenswrapper[4758]: I0130 08:31:33.278176 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" event={"ID":"768774e5-5df7-470d-a77b-593638e3a4ef","Type":"ContainerStarted","Data":"ca565fbed1ef3c3da3074606a8709d22a9c1b470e012567a28f57f1e8dee4ce4"} Jan 30 08:31:33 crc kubenswrapper[4758]: I0130 08:31:33.278289 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" event={"ID":"768774e5-5df7-470d-a77b-593638e3a4ef","Type":"ContainerStarted","Data":"3874c30d204eb3c13fef0f9026749f04ec416e00f6b2f287573a49aa551462c8"} Jan 30 08:31:33 crc kubenswrapper[4758]: I0130 08:31:33.768663 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:33 crc kubenswrapper[4758]: E0130 08:31:33.768964 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:34 crc kubenswrapper[4758]: I0130 08:31:34.079908 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:34 crc kubenswrapper[4758]: E0130 08:31:34.080234 4758 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:31:34 crc kubenswrapper[4758]: E0130 08:31:34.080752 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs podName:83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4 nodeName:}" failed. No retries permitted until 2026-01-30 08:32:38.08071997 +0000 UTC m=+163.053031561 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs") pod "network-metrics-daemon-gj6b4" (UID: "83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 08:31:34 crc kubenswrapper[4758]: I0130 08:31:34.768533 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:34 crc kubenswrapper[4758]: E0130 08:31:34.768710 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:34 crc kubenswrapper[4758]: I0130 08:31:34.768796 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:34 crc kubenswrapper[4758]: E0130 08:31:34.769163 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:34 crc kubenswrapper[4758]: I0130 08:31:34.769621 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:34 crc kubenswrapper[4758]: E0130 08:31:34.769858 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:35 crc kubenswrapper[4758]: I0130 08:31:35.768402 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:35 crc kubenswrapper[4758]: E0130 08:31:35.769909 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:36 crc kubenswrapper[4758]: I0130 08:31:36.768172 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:36 crc kubenswrapper[4758]: I0130 08:31:36.768238 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:36 crc kubenswrapper[4758]: E0130 08:31:36.768341 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:36 crc kubenswrapper[4758]: I0130 08:31:36.768512 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:36 crc kubenswrapper[4758]: E0130 08:31:36.768685 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:36 crc kubenswrapper[4758]: E0130 08:31:36.768860 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:37 crc kubenswrapper[4758]: I0130 08:31:37.768343 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:37 crc kubenswrapper[4758]: E0130 08:31:37.768468 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:38 crc kubenswrapper[4758]: I0130 08:31:38.768083 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:38 crc kubenswrapper[4758]: I0130 08:31:38.768083 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:38 crc kubenswrapper[4758]: I0130 08:31:38.768252 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:38 crc kubenswrapper[4758]: E0130 08:31:38.768410 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:38 crc kubenswrapper[4758]: E0130 08:31:38.768562 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:38 crc kubenswrapper[4758]: E0130 08:31:38.768657 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:39 crc kubenswrapper[4758]: I0130 08:31:39.768414 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:39 crc kubenswrapper[4758]: E0130 08:31:39.768552 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:40 crc kubenswrapper[4758]: I0130 08:31:40.768068 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:40 crc kubenswrapper[4758]: I0130 08:31:40.768141 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:40 crc kubenswrapper[4758]: I0130 08:31:40.768237 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:40 crc kubenswrapper[4758]: E0130 08:31:40.768315 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:40 crc kubenswrapper[4758]: E0130 08:31:40.768435 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:40 crc kubenswrapper[4758]: E0130 08:31:40.768526 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:41 crc kubenswrapper[4758]: I0130 08:31:41.770338 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:41 crc kubenswrapper[4758]: E0130 08:31:41.770544 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:41 crc kubenswrapper[4758]: I0130 08:31:41.771574 4758 scope.go:117] "RemoveContainer" containerID="8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540" Jan 30 08:31:41 crc kubenswrapper[4758]: E0130 08:31:41.772075 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" Jan 30 08:31:42 crc kubenswrapper[4758]: I0130 08:31:42.768447 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:42 crc kubenswrapper[4758]: I0130 08:31:42.768509 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:42 crc kubenswrapper[4758]: I0130 08:31:42.768467 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:42 crc kubenswrapper[4758]: E0130 08:31:42.768701 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:42 crc kubenswrapper[4758]: E0130 08:31:42.768805 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:42 crc kubenswrapper[4758]: E0130 08:31:42.768961 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:43 crc kubenswrapper[4758]: I0130 08:31:43.768147 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:43 crc kubenswrapper[4758]: E0130 08:31:43.768330 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:44 crc kubenswrapper[4758]: I0130 08:31:44.768296 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:44 crc kubenswrapper[4758]: E0130 08:31:44.768510 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:44 crc kubenswrapper[4758]: I0130 08:31:44.768539 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:44 crc kubenswrapper[4758]: E0130 08:31:44.768821 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:44 crc kubenswrapper[4758]: I0130 08:31:44.769179 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:44 crc kubenswrapper[4758]: E0130 08:31:44.769257 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:45 crc kubenswrapper[4758]: I0130 08:31:45.768358 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:45 crc kubenswrapper[4758]: E0130 08:31:45.769755 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:46 crc kubenswrapper[4758]: I0130 08:31:46.768309 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:46 crc kubenswrapper[4758]: I0130 08:31:46.768360 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:46 crc kubenswrapper[4758]: E0130 08:31:46.768423 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:46 crc kubenswrapper[4758]: I0130 08:31:46.768314 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:46 crc kubenswrapper[4758]: E0130 08:31:46.768594 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:46 crc kubenswrapper[4758]: E0130 08:31:46.768663 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:47 crc kubenswrapper[4758]: I0130 08:31:47.768066 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:47 crc kubenswrapper[4758]: E0130 08:31:47.768348 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:48 crc kubenswrapper[4758]: I0130 08:31:48.768090 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:48 crc kubenswrapper[4758]: I0130 08:31:48.768121 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:48 crc kubenswrapper[4758]: E0130 08:31:48.768195 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:48 crc kubenswrapper[4758]: I0130 08:31:48.768100 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:48 crc kubenswrapper[4758]: E0130 08:31:48.768251 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:48 crc kubenswrapper[4758]: E0130 08:31:48.768277 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:49 crc kubenswrapper[4758]: I0130 08:31:49.768543 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:49 crc kubenswrapper[4758]: E0130 08:31:49.769268 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.342955 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/1.log" Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.343979 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/0.log" Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.344093 4758 generic.go:334] "Generic (PLEG): container finished" podID="fac75e9c-fc94-4c83-8613-bce0f4744079" containerID="e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1" exitCode=1 Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.344137 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99ddw" event={"ID":"fac75e9c-fc94-4c83-8613-bce0f4744079","Type":"ContainerDied","Data":"e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1"} Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.344187 4758 scope.go:117] "RemoveContainer" containerID="0fde4256e56e3c2971e1bdf8568f4910c436b8f4379ac190fc6f8cca245c42cb" Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.345490 4758 scope.go:117] "RemoveContainer" containerID="e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1" Jan 30 08:31:50 crc kubenswrapper[4758]: E0130 08:31:50.346100 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-99ddw_openshift-multus(fac75e9c-fc94-4c83-8613-bce0f4744079)\"" pod="openshift-multus/multus-99ddw" podUID="fac75e9c-fc94-4c83-8613-bce0f4744079" Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.376194 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rljx9" podStartSLOduration=95.376173744 podStartE2EDuration="1m35.376173744s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:31:33.31441611 +0000 UTC m=+98.286727751" watchObservedRunningTime="2026-01-30 08:31:50.376173744 +0000 UTC m=+115.348485315" Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.768122 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.768122 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:50 crc kubenswrapper[4758]: E0130 08:31:50.768656 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:50 crc kubenswrapper[4758]: E0130 08:31:50.768814 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:50 crc kubenswrapper[4758]: I0130 08:31:50.768120 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:50 crc kubenswrapper[4758]: E0130 08:31:50.769183 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:51 crc kubenswrapper[4758]: I0130 08:31:51.351998 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/1.log" Jan 30 08:31:51 crc kubenswrapper[4758]: I0130 08:31:51.768298 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:51 crc kubenswrapper[4758]: E0130 08:31:51.768471 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:52 crc kubenswrapper[4758]: I0130 08:31:52.767788 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:52 crc kubenswrapper[4758]: I0130 08:31:52.767971 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:52 crc kubenswrapper[4758]: I0130 08:31:52.768178 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:52 crc kubenswrapper[4758]: E0130 08:31:52.768169 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:52 crc kubenswrapper[4758]: E0130 08:31:52.768343 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:52 crc kubenswrapper[4758]: E0130 08:31:52.768654 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:53 crc kubenswrapper[4758]: I0130 08:31:53.768270 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:53 crc kubenswrapper[4758]: E0130 08:31:53.768738 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:53 crc kubenswrapper[4758]: I0130 08:31:53.769310 4758 scope.go:117] "RemoveContainer" containerID="8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540" Jan 30 08:31:53 crc kubenswrapper[4758]: E0130 08:31:53.769435 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-d2cb9_openshift-ovn-kubernetes(a682aa56-1a48-46dd-a06c-8cbaaeea7008)\"" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" Jan 30 08:31:54 crc kubenswrapper[4758]: I0130 08:31:54.768061 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:54 crc kubenswrapper[4758]: I0130 08:31:54.768195 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:54 crc kubenswrapper[4758]: E0130 08:31:54.768237 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:54 crc kubenswrapper[4758]: I0130 08:31:54.768317 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:54 crc kubenswrapper[4758]: E0130 08:31:54.768548 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:54 crc kubenswrapper[4758]: E0130 08:31:54.768738 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:55 crc kubenswrapper[4758]: E0130 08:31:55.716303 4758 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 30 08:31:55 crc kubenswrapper[4758]: I0130 08:31:55.767755 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:55 crc kubenswrapper[4758]: E0130 08:31:55.769463 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:55 crc kubenswrapper[4758]: E0130 08:31:55.868194 4758 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 08:31:56 crc kubenswrapper[4758]: I0130 08:31:56.767773 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:56 crc kubenswrapper[4758]: I0130 08:31:56.767923 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:56 crc kubenswrapper[4758]: E0130 08:31:56.767955 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:56 crc kubenswrapper[4758]: I0130 08:31:56.767783 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:56 crc kubenswrapper[4758]: E0130 08:31:56.768217 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:56 crc kubenswrapper[4758]: E0130 08:31:56.768305 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:57 crc kubenswrapper[4758]: I0130 08:31:57.768789 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:57 crc kubenswrapper[4758]: E0130 08:31:57.769022 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:31:58 crc kubenswrapper[4758]: I0130 08:31:58.767982 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:31:58 crc kubenswrapper[4758]: I0130 08:31:58.768090 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:31:58 crc kubenswrapper[4758]: I0130 08:31:58.768601 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:31:58 crc kubenswrapper[4758]: E0130 08:31:58.768764 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:31:58 crc kubenswrapper[4758]: E0130 08:31:58.768929 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:31:58 crc kubenswrapper[4758]: E0130 08:31:58.769204 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:31:59 crc kubenswrapper[4758]: I0130 08:31:59.768324 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:31:59 crc kubenswrapper[4758]: E0130 08:31:59.769231 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:32:00 crc kubenswrapper[4758]: I0130 08:32:00.767821 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:00 crc kubenswrapper[4758]: E0130 08:32:00.768006 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:32:00 crc kubenswrapper[4758]: I0130 08:32:00.768332 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:00 crc kubenswrapper[4758]: I0130 08:32:00.768389 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:00 crc kubenswrapper[4758]: E0130 08:32:00.768549 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:32:00 crc kubenswrapper[4758]: E0130 08:32:00.768692 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:32:00 crc kubenswrapper[4758]: E0130 08:32:00.869704 4758 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 08:32:01 crc kubenswrapper[4758]: I0130 08:32:01.768217 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:01 crc kubenswrapper[4758]: E0130 08:32:01.768439 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:32:02 crc kubenswrapper[4758]: I0130 08:32:02.767645 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:02 crc kubenswrapper[4758]: I0130 08:32:02.767778 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:02 crc kubenswrapper[4758]: E0130 08:32:02.767851 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:32:02 crc kubenswrapper[4758]: I0130 08:32:02.767950 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:02 crc kubenswrapper[4758]: E0130 08:32:02.768123 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:32:02 crc kubenswrapper[4758]: E0130 08:32:02.768191 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:32:03 crc kubenswrapper[4758]: I0130 08:32:03.768287 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:03 crc kubenswrapper[4758]: E0130 08:32:03.768517 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:32:04 crc kubenswrapper[4758]: I0130 08:32:04.767778 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:04 crc kubenswrapper[4758]: I0130 08:32:04.767931 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:04 crc kubenswrapper[4758]: E0130 08:32:04.768075 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:32:04 crc kubenswrapper[4758]: I0130 08:32:04.768158 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:04 crc kubenswrapper[4758]: E0130 08:32:04.768257 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:32:04 crc kubenswrapper[4758]: E0130 08:32:04.768494 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:32:04 crc kubenswrapper[4758]: I0130 08:32:04.768951 4758 scope.go:117] "RemoveContainer" containerID="e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1" Jan 30 08:32:05 crc kubenswrapper[4758]: I0130 08:32:05.412244 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/1.log" Jan 30 08:32:05 crc kubenswrapper[4758]: I0130 08:32:05.412892 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99ddw" event={"ID":"fac75e9c-fc94-4c83-8613-bce0f4744079","Type":"ContainerStarted","Data":"52cb65a07b895a3f9c811e540c2852dc09469aa1336caa2d4f74c566cc414a19"} Jan 30 08:32:05 crc kubenswrapper[4758]: I0130 08:32:05.768570 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:05 crc kubenswrapper[4758]: E0130 08:32:05.769972 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:32:05 crc kubenswrapper[4758]: E0130 08:32:05.871016 4758 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 08:32:06 crc kubenswrapper[4758]: I0130 08:32:06.767522 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:06 crc kubenswrapper[4758]: E0130 08:32:06.767663 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:32:06 crc kubenswrapper[4758]: I0130 08:32:06.767522 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:06 crc kubenswrapper[4758]: E0130 08:32:06.767756 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:32:06 crc kubenswrapper[4758]: I0130 08:32:06.767524 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:06 crc kubenswrapper[4758]: E0130 08:32:06.767855 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:32:07 crc kubenswrapper[4758]: I0130 08:32:07.767917 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:07 crc kubenswrapper[4758]: E0130 08:32:07.768132 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:32:08 crc kubenswrapper[4758]: I0130 08:32:08.767802 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:08 crc kubenswrapper[4758]: I0130 08:32:08.767832 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:08 crc kubenswrapper[4758]: I0130 08:32:08.767903 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:08 crc kubenswrapper[4758]: E0130 08:32:08.768028 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:32:08 crc kubenswrapper[4758]: E0130 08:32:08.768170 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:32:08 crc kubenswrapper[4758]: E0130 08:32:08.768575 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:32:08 crc kubenswrapper[4758]: I0130 08:32:08.768897 4758 scope.go:117] "RemoveContainer" containerID="8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540" Jan 30 08:32:09 crc kubenswrapper[4758]: I0130 08:32:09.425841 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/3.log" Jan 30 08:32:09 crc kubenswrapper[4758]: I0130 08:32:09.428531 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerStarted","Data":"b0101d7301a7b51eb2ab1a83d9b6004c067fa83b3b027df6dd62ac9569ce0353"} Jan 30 08:32:09 crc kubenswrapper[4758]: I0130 08:32:09.428912 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:32:09 crc kubenswrapper[4758]: I0130 08:32:09.455664 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podStartSLOduration=114.455644914 podStartE2EDuration="1m54.455644914s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:09.45265563 +0000 UTC m=+134.424967211" watchObservedRunningTime="2026-01-30 08:32:09.455644914 +0000 UTC m=+134.427956475" Jan 30 08:32:09 crc kubenswrapper[4758]: I0130 08:32:09.725703 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gj6b4"] Jan 30 08:32:09 crc kubenswrapper[4758]: I0130 08:32:09.725868 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:09 crc kubenswrapper[4758]: E0130 08:32:09.726071 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:32:10 crc kubenswrapper[4758]: I0130 08:32:10.768279 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:10 crc kubenswrapper[4758]: E0130 08:32:10.768603 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:32:10 crc kubenswrapper[4758]: I0130 08:32:10.768298 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:10 crc kubenswrapper[4758]: I0130 08:32:10.768335 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:10 crc kubenswrapper[4758]: E0130 08:32:10.768757 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:32:10 crc kubenswrapper[4758]: E0130 08:32:10.768810 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:32:10 crc kubenswrapper[4758]: E0130 08:32:10.872655 4758 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 08:32:11 crc kubenswrapper[4758]: I0130 08:32:11.768390 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:11 crc kubenswrapper[4758]: E0130 08:32:11.768542 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:32:12 crc kubenswrapper[4758]: I0130 08:32:12.767697 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:12 crc kubenswrapper[4758]: I0130 08:32:12.767809 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:12 crc kubenswrapper[4758]: I0130 08:32:12.767728 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:12 crc kubenswrapper[4758]: E0130 08:32:12.767894 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:32:12 crc kubenswrapper[4758]: E0130 08:32:12.768123 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:32:12 crc kubenswrapper[4758]: E0130 08:32:12.768203 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:32:13 crc kubenswrapper[4758]: I0130 08:32:13.767770 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:13 crc kubenswrapper[4758]: E0130 08:32:13.767974 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:32:14 crc kubenswrapper[4758]: I0130 08:32:14.768479 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:14 crc kubenswrapper[4758]: E0130 08:32:14.768624 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 08:32:14 crc kubenswrapper[4758]: I0130 08:32:14.768703 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:14 crc kubenswrapper[4758]: E0130 08:32:14.768913 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 08:32:14 crc kubenswrapper[4758]: I0130 08:32:14.769274 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:14 crc kubenswrapper[4758]: E0130 08:32:14.769422 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 08:32:15 crc kubenswrapper[4758]: I0130 08:32:15.767622 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:15 crc kubenswrapper[4758]: E0130 08:32:15.769580 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gj6b4" podUID="83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4" Jan 30 08:32:16 crc kubenswrapper[4758]: I0130 08:32:16.767993 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:16 crc kubenswrapper[4758]: I0130 08:32:16.768169 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:16 crc kubenswrapper[4758]: I0130 08:32:16.768384 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:16 crc kubenswrapper[4758]: I0130 08:32:16.772415 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 08:32:16 crc kubenswrapper[4758]: I0130 08:32:16.772690 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 08:32:16 crc kubenswrapper[4758]: I0130 08:32:16.780875 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 08:32:16 crc kubenswrapper[4758]: I0130 08:32:16.781137 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 08:32:17 crc kubenswrapper[4758]: I0130 08:32:17.767723 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4" Jan 30 08:32:17 crc kubenswrapper[4758]: I0130 08:32:17.769948 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 08:32:17 crc kubenswrapper[4758]: I0130 08:32:17.770006 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 08:32:19 crc kubenswrapper[4758]: I0130 08:32:19.241586 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.387299 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.387358 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.670430 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.670535 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:22 crc kubenswrapper[4758]: E0130 08:32:22.670599 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:34:24.670568542 +0000 UTC m=+269.642880123 (durationBeforeRetry 2m2s). 
Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.670430 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.670535 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 08:32:22 crc kubenswrapper[4758]: E0130 08:32:22.670599 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:34:24.670568542 +0000 UTC m=+269.642880123 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
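The "No retries permitted until ... (durationBeforeRetry 2m2s)" record above is the volume manager's per-operation exponential backoff sitting at its ceiling: each failed TearDown doubles the wait, and 2m2s is the cap, which is why the retry is scheduled a full two minutes out. A sketch of that doubling scheme, assuming the commonly cited defaults of a 500ms initial delay, factor 2, and a 2m2s maximum (treat those constants as assumptions, not a quote of kubelet source):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        backoff := 500 * time.Millisecond
        const maxBackoff = 2*time.Minute + 2*time.Second // the 2m2s cap in the log
        for attempt := 1; attempt <= 10; attempt++ {
            fmt.Printf("attempt %d: wait %v before retry\n", attempt, backoff)
            backoff *= 2
            if backoff > maxBackoff {
                backoff = maxBackoff // stays pinned at 2m2s from here on
            }
        }
    }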
be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.827273 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 08:32:22 crc kubenswrapper[4758]: I0130 08:32:22.838968 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 08:32:23 crc kubenswrapper[4758]: I0130 08:32:23.477575 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bdcb25005d1c2bb73f54b7d40b3b8ae6fb360057c8ed105fd97cf1f6bbc587f9"} Jan 30 08:32:23 crc kubenswrapper[4758]: I0130 08:32:23.477916 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"932143a6e8b2a22127122c1b00ee8c2202282b86f76c6623cb309ae5f2a8cf55"} Jan 30 08:32:23 crc kubenswrapper[4758]: I0130 08:32:23.479408 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"99434f6bf1ff80dce949575876654f2681af5bc06a1b726dc50de372aac982fb"} Jan 30 08:32:23 crc kubenswrapper[4758]: I0130 08:32:23.479491 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"df9f6c7b0c4f7528fd08a093c08256243e0150460509444ccbe86889d742ca6f"} Jan 30 08:32:23 crc kubenswrapper[4758]: I0130 08:32:23.479764 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 08:32:23 crc kubenswrapper[4758]: I0130 08:32:23.480506 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1d79978374fca6fe6a57a8596223fcc9c30b1e4e525a1c5be28be07f4c3db244"} Jan 30 08:32:23 crc kubenswrapper[4758]: I0130 08:32:23.480536 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a1f32a3da5a84270f92e09e66774efe0dade64b3ef0562aa0ce864e43a588c03"} Jan 30 08:32:23 crc kubenswrapper[4758]: I0130 08:32:23.960235 4758 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.007935 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hrqb6"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.008822 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.014492 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.014577 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.014595 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.018515 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.018874 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.023716 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.023791 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.023812 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.024362 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.024402 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.025021 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9kt48"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.026165 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.046994 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.048452 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.048955 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.049111 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.049248 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.050334 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.060228 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.060917 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q592c"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.061696 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.064693 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.065077 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.066626 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.066866 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kljqw"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.067319 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.068548 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ssdl5"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.069218 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.069383 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.070240 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.072584 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.073227 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.073974 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.078160 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-rxgh6"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.078912 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.079396 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.079439 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.079935 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.080057 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.082232 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8zkrv"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.082822 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.082841 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.083498 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.084861 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-zp6d8"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085245 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-zp6d8" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085812 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-config\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085836 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085860 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg4sc\" (UniqueName: \"kubernetes.io/projected/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-kube-api-access-gg4sc\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085880 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/814885ca-d12b-49a3-a788-4648517a1c23-etcd-client\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085900 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-config\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085918 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b5551b0-f51c-43c9-bf60-e191132339fe-serving-cert\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085936 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-audit\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085954 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-image-import-ca\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085973 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6shg\" (UniqueName: \"kubernetes.io/projected/31897db9-9dd1-42a9-8eae-b5e13e113a3c-kube-api-access-m6shg\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.085991 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/814885ca-d12b-49a3-a788-4648517a1c23-audit-dir\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086022 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-images\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086057 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b5551b0-f51c-43c9-bf60-e191132339fe-config\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086077 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-etcd-serving-ca\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086096 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/814885ca-d12b-49a3-a788-4648517a1c23-serving-cert\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086114 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-serving-cert\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086135 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-config\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086152 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-client-ca\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086169 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-client-ca\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086188 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5551b0-f51c-43c9-bf60-e191132339fe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086207 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt49h\" (UniqueName: \"kubernetes.io/projected/5b5551b0-f51c-43c9-bf60-e191132339fe-kube-api-access-lt49h\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086228 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31897db9-9dd1-42a9-8eae-b5e13e113a3c-serving-cert\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086246 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/814885ca-d12b-49a3-a788-4648517a1c23-node-pullsecrets\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086263 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw2qt\" (UniqueName: \"kubernetes.io/projected/814885ca-d12b-49a3-a788-4648517a1c23-kube-api-access-cw2qt\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086285 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-config\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086304 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxrg7\" (UniqueName: 
\"kubernetes.io/projected/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-kube-api-access-mxrg7\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086325 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086343 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5551b0-f51c-43c9-bf60-e191132339fe-service-ca-bundle\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086365 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.086383 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/814885ca-d12b-49a3-a788-4648517a1c23-encryption-config\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.091288 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.091422 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.091487 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.091711 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.092164 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.094361 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.094470 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.094581 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.094676 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.094769 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.094896 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.094943 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.095016 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.095111 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.095554 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.095642 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.095720 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.095805 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.095925 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.096060 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.096191 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.096317 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.097474 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.097803 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.097884 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.097959 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.098057 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.098154 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.098257 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.098359 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.098445 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.098642 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.098729 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.098940 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.098980 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.099030 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.099097 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.099142 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.099208 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.099265 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.123362 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7trvn"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.159878 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.162789 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fd88w"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.163195 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-rmc6n"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.163490 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.163738 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.164757 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.173458 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.179182 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-z8tqh"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.179826 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.180222 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.181085 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.182368 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.184635 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.185574 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.187496 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-console-config\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.187562 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.187591 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-serving-cert\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.187646 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6shg\" (UniqueName: \"kubernetes.io/projected/31897db9-9dd1-42a9-8eae-b5e13e113a3c-kube-api-access-m6shg\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.187672 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.187719 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.187749 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/814885ca-d12b-49a3-a788-4648517a1c23-audit-dir\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.187887 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: 
\"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.187922 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-images\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188213 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-dir\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188248 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188381 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b5551b0-f51c-43c9-bf60-e191132339fe-config\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188412 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-etcd-serving-ca\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188546 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/814885ca-d12b-49a3-a788-4648517a1c23-serving-cert\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188571 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvl5k\" (UniqueName: \"kubernetes.io/projected/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-kube-api-access-qvl5k\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188705 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tl5t\" (UniqueName: \"kubernetes.io/projected/df190322-1e43-4ae4-ac74-78702c913801-kube-api-access-8tl5t\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188735 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-serving-cert\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188869 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-config\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188896 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-client-ca\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189655 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/20e5c3b1-91ce-4cbe-866b-745cb58e8c5d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-knqhk\" (UID: \"20e5c3b1-91ce-4cbe-866b-745cb58e8c5d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189803 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f452c53b-893b-4060-b573-595e98576792-serving-cert\") pod \"openshift-config-operator-7777fb866f-hvsqq\" (UID: \"f452c53b-893b-4060-b573-595e98576792\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189841 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5551b0-f51c-43c9-bf60-e191132339fe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189978 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt49h\" (UniqueName: \"kubernetes.io/projected/5b5551b0-f51c-43c9-bf60-e191132339fe-kube-api-access-lt49h\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190002 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-client-ca\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190144 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/31897db9-9dd1-42a9-8eae-b5e13e113a3c-serving-cert\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190169 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/814885ca-d12b-49a3-a788-4648517a1c23-node-pullsecrets\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190296 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw2qt\" (UniqueName: \"kubernetes.io/projected/814885ca-d12b-49a3-a788-4648517a1c23-kube-api-access-cw2qt\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190331 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190466 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-oauth-serving-cert\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190496 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190638 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190777 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-config\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190810 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvfb2\" (UniqueName: 
\"kubernetes.io/projected/20e5c3b1-91ce-4cbe-866b-745cb58e8c5d-kube-api-access-wvfb2\") pod \"cluster-samples-operator-665b6dd947-knqhk\" (UID: \"20e5c3b1-91ce-4cbe-866b-745cb58e8c5d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190966 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxrg7\" (UniqueName: \"kubernetes.io/projected/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-kube-api-access-mxrg7\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190994 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191127 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-service-ca\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191151 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2db7\" (UniqueName: \"kubernetes.io/projected/e376d872-d6db-4f3b-b9f0-9fff22f7546d-kube-api-access-q2db7\") pod \"openshift-controller-manager-operator-756b6f6bc6-xvdbn\" (UID: \"e376d872-d6db-4f3b-b9f0-9fff22f7546d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191279 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5551b0-f51c-43c9-bf60-e191132339fe-service-ca-bundle\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191312 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191364 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b5551b0-f51c-43c9-bf60-e191132339fe-config\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191470 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/814885ca-d12b-49a3-a788-4648517a1c23-encryption-config\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191537 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e376d872-d6db-4f3b-b9f0-9fff22f7546d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xvdbn\" (UID: \"e376d872-d6db-4f3b-b9f0-9fff22f7546d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188449 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/814885ca-d12b-49a3-a788-4648517a1c23-audit-dir\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191590 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-config\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191740 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191766 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-policies\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191924 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-images\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191980 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192245 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg4sc\" (UniqueName: \"kubernetes.io/projected/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-kube-api-access-gg4sc\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192274 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/814885ca-d12b-49a3-a788-4648517a1c23-etcd-client\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192297 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192322 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-oauth-config\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192344 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-trusted-ca-bundle\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192366 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e376d872-d6db-4f3b-b9f0-9fff22f7546d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xvdbn\" (UID: \"e376d872-d6db-4f3b-b9f0-9fff22f7546d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192392 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-config\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192435 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192459 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxgp4\" (UniqueName: \"kubernetes.io/projected/f452c53b-893b-4060-b573-595e98576792-kube-api-access-hxgp4\") pod \"openshift-config-operator-7777fb866f-hvsqq\" (UID: \"f452c53b-893b-4060-b573-595e98576792\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" 
Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192489 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b5551b0-f51c-43c9-bf60-e191132339fe-serving-cert\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192511 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f452c53b-893b-4060-b573-595e98576792-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hvsqq\" (UID: \"f452c53b-893b-4060-b573-595e98576792\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192543 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-audit\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192912 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-image-import-ca\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192112 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-etcd-serving-ca\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.194031 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-image-import-ca\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.195337 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-config\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.195864 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-audit\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.188759 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.197069 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-client-ca\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.197152 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/814885ca-d12b-49a3-a788-4648517a1c23-node-pullsecrets\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.197987 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-config\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.198029 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5551b0-f51c-43c9-bf60-e191132339fe-service-ca-bundle\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.198640 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hrqb6"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.198710 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.199561 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.200060 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.200424 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.200474 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-config\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.201751 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189621 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189656 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189686 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189712 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189780 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189804 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.189833 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190214 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190245 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190271 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190294 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190322 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190346 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190371 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190396 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190428 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190455 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190481 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190509 4758 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190547 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.206393 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190579 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190620 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190654 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190724 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.190748 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191212 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191212 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191263 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.207200 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191312 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191331 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191472 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.207466 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5551b0-f51c-43c9-bf60-e191132339fe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191484 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.236569 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b5551b0-f51c-43c9-bf60-e191132339fe-serving-cert\") pod 
\"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.236992 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/814885ca-d12b-49a3-a788-4648517a1c23-serving-cert\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.238275 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-serving-cert\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.245853 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31897db9-9dd1-42a9-8eae-b5e13e113a3c-serving-cert\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.246788 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.247315 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/814885ca-d12b-49a3-a788-4648517a1c23-etcd-client\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.247833 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-client-ca\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.249673 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-config\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.250661 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.252203 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.257287 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/814885ca-d12b-49a3-a788-4648517a1c23-encryption-config\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.257841 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.258458 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191536 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191610 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191653 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191655 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191717 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191756 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191775 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.191820 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.192413 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.193126 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.195250 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.270512 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9kt48"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.273001 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/814885ca-d12b-49a3-a788-4648517a1c23-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hrqb6\" (UID: 
\"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.276441 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z9hfd"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.293324 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.296062 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.310947 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312517 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvl5k\" (UniqueName: \"kubernetes.io/projected/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-kube-api-access-qvl5k\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312568 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tl5t\" (UniqueName: \"kubernetes.io/projected/df190322-1e43-4ae4-ac74-78702c913801-kube-api-access-8tl5t\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312609 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/20e5c3b1-91ce-4cbe-866b-745cb58e8c5d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-knqhk\" (UID: \"20e5c3b1-91ce-4cbe-866b-745cb58e8c5d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312654 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f452c53b-893b-4060-b573-595e98576792-serving-cert\") pod \"openshift-config-operator-7777fb866f-hvsqq\" (UID: \"f452c53b-893b-4060-b573-595e98576792\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312710 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312742 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-oauth-serving-cert\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 
08:32:24.312774 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312810 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312849 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvfb2\" (UniqueName: \"kubernetes.io/projected/20e5c3b1-91ce-4cbe-866b-745cb58e8c5d-kube-api-access-wvfb2\") pod \"cluster-samples-operator-665b6dd947-knqhk\" (UID: \"20e5c3b1-91ce-4cbe-866b-745cb58e8c5d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312893 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-service-ca\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312928 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2db7\" (UniqueName: \"kubernetes.io/projected/e376d872-d6db-4f3b-b9f0-9fff22f7546d-kube-api-access-q2db7\") pod \"openshift-controller-manager-operator-756b6f6bc6-xvdbn\" (UID: \"e376d872-d6db-4f3b-b9f0-9fff22f7546d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.312966 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e376d872-d6db-4f3b-b9f0-9fff22f7546d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xvdbn\" (UID: \"e376d872-d6db-4f3b-b9f0-9fff22f7546d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313001 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-policies\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313060 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313110 
4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313148 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-oauth-config\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313184 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-trusted-ca-bundle\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313227 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e376d872-d6db-4f3b-b9f0-9fff22f7546d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xvdbn\" (UID: \"e376d872-d6db-4f3b-b9f0-9fff22f7546d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313262 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313295 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxgp4\" (UniqueName: \"kubernetes.io/projected/f452c53b-893b-4060-b573-595e98576792-kube-api-access-hxgp4\") pod \"openshift-config-operator-7777fb866f-hvsqq\" (UID: \"f452c53b-893b-4060-b573-595e98576792\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313327 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f452c53b-893b-4060-b573-595e98576792-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hvsqq\" (UID: \"f452c53b-893b-4060-b573-595e98576792\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313359 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-console-config\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313389 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313418 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-serving-cert\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313462 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313492 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313536 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313579 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-dir\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.313609 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.314020 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.314128 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.314715 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7mbqg"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.314754 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f452c53b-893b-4060-b573-595e98576792-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hvsqq\" (UID: \"f452c53b-893b-4060-b573-595e98576792\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.315157 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.317054 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.317834 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.317923 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.317957 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.318225 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.318354 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.318627 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.321148 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.321893 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.322614 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e376d872-d6db-4f3b-b9f0-9fff22f7546d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xvdbn\" (UID: \"e376d872-d6db-4f3b-b9f0-9fff22f7546d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.323943 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.325106 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.325352 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-trusted-ca-bundle\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.325569 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.328697 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-oauth-config\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.326455 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.326466 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-console-config\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.328855 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.328860 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.328921 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-dir\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.329052 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/20e5c3b1-91ce-4cbe-866b-745cb58e8c5d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-knqhk\" (UID: \"20e5c3b1-91ce-4cbe-866b-745cb58e8c5d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.325703 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.330120 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.330380 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.330428 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-service-ca\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.331623 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-policies\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.333952 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-oauth-serving-cert\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.334451 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.334627 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-serving-cert\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.334951 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.335777 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.335791 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.336138 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.336888 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.337591 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.338239 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.339026 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.339739 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.340343 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hlm86"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.340823 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.341026 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.341219 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e376d872-d6db-4f3b-b9f0-9fff22f7546d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xvdbn\" (UID: \"e376d872-d6db-4f3b-b9f0-9fff22f7546d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.341681 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.342490 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.345101 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.346204 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q592c"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.346722 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.347628 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rxgh6"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.347806 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.351546 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.353325 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-ggszh"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.353664 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f452c53b-893b-4060-b573-595e98576792-serving-cert\") pod \"openshift-config-operator-7777fb866f-hvsqq\" (UID: \"f452c53b-893b-4060-b573-595e98576792\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.354403 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kljqw"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.354534 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.359718 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7trvn"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.364064 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.364779 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.366136 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.369948 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.371203 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.372684 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z9hfd"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.375222 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.376822 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ssdl5"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.378783 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7h2zc"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.381659 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fd88w"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.381799 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.382212 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kdljh"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.382942 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.385177 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.387129 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8zkrv"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.388758 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-z8tqh"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.390647 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.390919 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.392893 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.394808 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.397389 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.399337 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.401009 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.402541 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.404147 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7h2zc"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.412752 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.425117 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.427648 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zp6d8"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.429057 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.431424 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.432356 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-7mbqg"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.434369 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.436061 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.438424 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hlm86"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.438462 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.439391 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.440684 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.441842 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.442877 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-sgrqx"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.444779 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-sgrqx" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.445282 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ggszh"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.446849 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-sgrqx"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.451379 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.471123 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.490935 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.510737 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.531636 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.572117 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6shg\" (UniqueName: \"kubernetes.io/projected/31897db9-9dd1-42a9-8eae-b5e13e113a3c-kube-api-access-m6shg\") pod \"route-controller-manager-6576b87f9c-n7zk7\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.590274 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg4sc\" (UniqueName: \"kubernetes.io/projected/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-kube-api-access-gg4sc\") pod \"controller-manager-879f6c89f-9kt48\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.609460 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw2qt\" (UniqueName: \"kubernetes.io/projected/814885ca-d12b-49a3-a788-4648517a1c23-kube-api-access-cw2qt\") pod \"apiserver-76f77b778f-hrqb6\" (UID: \"814885ca-d12b-49a3-a788-4648517a1c23\") " pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.623772 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.631728 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxrg7\" (UniqueName: \"kubernetes.io/projected/8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972-kube-api-access-mxrg7\") pod \"machine-api-operator-5694c8668f-q592c\" (UID: \"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.633733 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.652442 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.663366 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.671805 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.694599 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.718736 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.733292 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.734980 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.758948 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.762811 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.771237 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.791087 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.811933 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.834283 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.852626 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.871901 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.889819 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hrqb6"] Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.893593 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 08:32:24 crc kubenswrapper[4758]: W0130 08:32:24.901509 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod814885ca_d12b_49a3_a788_4648517a1c23.slice/crio-cf05fefe3e528ed06d6e0104e100fd57f9d3ccdb450ae41ca199ebf454ded551 WatchSource:0}: Error finding container cf05fefe3e528ed06d6e0104e100fd57f9d3ccdb450ae41ca199ebf454ded551: Status 404 returned error can't find the container with id cf05fefe3e528ed06d6e0104e100fd57f9d3ccdb450ae41ca199ebf454ded551 Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.911828 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.952552 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt49h\" (UniqueName: \"kubernetes.io/projected/5b5551b0-f51c-43c9-bf60-e191132339fe-kube-api-access-lt49h\") pod \"authentication-operator-69f744f599-kljqw\" (UID: \"5b5551b0-f51c-43c9-bf60-e191132339fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:24 crc kubenswrapper[4758]: I0130 08:32:24.991892 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.011438 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024596 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/359e47a9-5633-496e-9522-d7c522c674bf-audit-policies\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024632 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-bound-sa-token\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024653 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/359e47a9-5633-496e-9522-d7c522c674bf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024672 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/755faa64-0182-4450-bd27-cb87446008d8-trusted-ca\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024708 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/467f100f-83e4-43b0-bcf0-16cfe7cb0393-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024722 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/359e47a9-5633-496e-9522-d7c522c674bf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024747 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kngv7\" (UniqueName: \"kubernetes.io/projected/becac525-d7ec-48b6-9f52-3b7ca1606e50-kube-api-access-kngv7\") pod \"migrator-59844c95c7-57j8p\" (UID: \"becac525-d7ec-48b6-9f52-3b7ca1606e50\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024765 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-certificates\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024778 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-trusted-ca\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc 
kubenswrapper[4758]: I0130 08:32:25.024794 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmxtx\" (UniqueName: \"kubernetes.io/projected/d03a1e8b-8151-4fb9-8a25-56e567566244-kube-api-access-lmxtx\") pod \"dns-operator-744455d44c-7trvn\" (UID: \"d03a1e8b-8151-4fb9-8a25-56e567566244\") " pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024809 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s94b7\" (UniqueName: \"kubernetes.io/projected/359e47a9-5633-496e-9522-d7c522c674bf-kube-api-access-s94b7\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024843 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4ggx\" (UniqueName: \"kubernetes.io/projected/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-kube-api-access-h4ggx\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024860 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-tls\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024874 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/359e47a9-5633-496e-9522-d7c522c674bf-etcd-client\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024915 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024930 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/755faa64-0182-4450-bd27-cb87446008d8-serving-cert\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024948 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-default-certificate\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024965 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d03a1e8b-8151-4fb9-8a25-56e567566244-metrics-tls\") pod \"dns-operator-744455d44c-7trvn\" (UID: \"d03a1e8b-8151-4fb9-8a25-56e567566244\") " pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.024981 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-stats-auth\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025031 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf779\" (UniqueName: \"kubernetes.io/projected/b0313d23-ff69-4957-ad3b-d6adc246aad5-kube-api-access-kf779\") pod \"downloads-7954f5f757-zp6d8\" (UID: \"b0313d23-ff69-4957-ad3b-d6adc246aad5\") " pod="openshift-console/downloads-7954f5f757-zp6d8" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025062 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/359e47a9-5633-496e-9522-d7c522c674bf-audit-dir\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025094 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-service-ca-bundle\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025128 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/467f100f-83e4-43b0-bcf0-16cfe7cb0393-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025144 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn5vx\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-kube-api-access-pn5vx\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025158 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755faa64-0182-4450-bd27-cb87446008d8-config\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025181 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/359e47a9-5633-496e-9522-d7c522c674bf-serving-cert\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025198 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q92t4\" (UniqueName: \"kubernetes.io/projected/755faa64-0182-4450-bd27-cb87446008d8-kube-api-access-q92t4\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025212 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/359e47a9-5633-496e-9522-d7c522c674bf-encryption-config\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.025227 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-metrics-certs\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.026495 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:25.526482102 +0000 UTC m=+150.498793653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.032315 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.052310 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.073607 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.087104 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.091391 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.113581 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127301 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.127503 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:25.627474471 +0000 UTC m=+150.599786022 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127567 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-csi-data-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127613 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2950f4ab-791b-4190-9455-14e34e95f22d-signing-cabundle\") pod \"service-ca-9c57cc56f-hlm86\" (UID: \"2950f4ab-791b-4190-9455-14e34e95f22d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127653 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79889e0a-985e-4bcc-bdfe-2160f05a5bfe-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xts57\" (UID: \"79889e0a-985e-4bcc-bdfe-2160f05a5bfe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127698 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc7f2\" (UniqueName: \"kubernetes.io/projected/25e6f6dd-4791-48ca-a614-928eb2fd6886-kube-api-access-rc7f2\") pod \"service-ca-operator-777779d784-w6hpr\" (UID: \"25e6f6dd-4791-48ca-a614-928eb2fd6886\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:25 crc kubenswrapper[4758]: 
I0130 08:32:25.127730 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/359e47a9-5633-496e-9522-d7c522c674bf-encryption-config\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127751 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9313ed67-0218-4d32-adf7-710ba67de622-secret-volume\") pod \"collect-profiles-29496030-jndtz\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127788 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6e70d9dc-9e4c-45a3-b8d3-046067b91297-profile-collector-cert\") pod \"catalog-operator-68c6474976-th8pq\" (UID: \"6e70d9dc-9e4c-45a3-b8d3-046067b91297\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127815 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6dv4\" (UniqueName: \"kubernetes.io/projected/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-kube-api-access-z6dv4\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.127839 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-config\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128227 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/830a2e4b-3e0d-409e-9a0b-bf503e81d5e0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h6d9r\" (UID: \"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128294 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ad239a7-b360-4ef2-ae38-cd013bd6c2e6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wjbc5\" (UID: \"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128335 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/359e47a9-5633-496e-9522-d7c522c674bf-audit-policies\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128359 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bstw8\" (UniqueName: \"kubernetes.io/projected/87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426-kube-api-access-bstw8\") pod \"openshift-apiserver-operator-796bbdcf4f-vcvc6\" (UID: \"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128415 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/102f0f42-f8c6-4e98-9e96-1659a0a62c50-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128562 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79889e0a-985e-4bcc-bdfe-2160f05a5bfe-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xts57\" (UID: \"79889e0a-985e-4bcc-bdfe-2160f05a5bfe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128638 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-bound-sa-token\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128677 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-tmpfs\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128703 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-serving-cert\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128730 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/467f100f-83e4-43b0-bcf0-16cfe7cb0393-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128756 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/359e47a9-5633-496e-9522-d7c522c674bf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128819 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-msbbw\" (UniqueName: \"kubernetes.io/projected/5ad239a7-b360-4ef2-ae38-cd013bd6c2e6-kube-api-access-msbbw\") pod \"machine-config-controller-84d6567774-wjbc5\" (UID: \"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.128898 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kngv7\" (UniqueName: \"kubernetes.io/projected/becac525-d7ec-48b6-9f52-3b7ca1606e50-kube-api-access-kngv7\") pod \"migrator-59844c95c7-57j8p\" (UID: \"becac525-d7ec-48b6-9f52-3b7ca1606e50\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129040 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/359e47a9-5633-496e-9522-d7c522c674bf-audit-policies\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129354 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/359e47a9-5633-496e-9522-d7c522c674bf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129523 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc7b7\" (UniqueName: \"kubernetes.io/projected/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-kube-api-access-rc7b7\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129571 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0d98b4c-0d9c-4a9a-af05-1def738f8293-config\") pod \"kube-controller-manager-operator-78b949d7b-mpftb\" (UID: \"a0d98b4c-0d9c-4a9a-af05-1def738f8293\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129659 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-srv-cert\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: \"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129692 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/88956ba5-c91f-435b-94fa-d639c87311f3-node-bootstrap-token\") pod \"machine-config-server-kdljh\" (UID: \"88956ba5-c91f-435b-94fa-d639c87311f3\") " pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129721 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129745 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129788 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-trusted-ca\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129839 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s94b7\" (UniqueName: \"kubernetes.io/projected/359e47a9-5633-496e-9522-d7c522c674bf-kube-api-access-s94b7\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129900 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vcvc6\" (UID: \"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129929 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj26n\" (UniqueName: \"kubernetes.io/projected/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-kube-api-access-jj26n\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129972 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: \"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.129998 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/830a2e4b-3e0d-409e-9a0b-bf503e81d5e0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h6d9r\" (UID: \"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.130042 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-tls\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.130084 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67466d94-68c1-4700-aec7-f2dd533b2fd6-config\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.130111 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79889e0a-985e-4bcc-bdfe-2160f05a5bfe-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xts57\" (UID: \"79889e0a-985e-4bcc-bdfe-2160f05a5bfe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.130134 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9313ed67-0218-4d32-adf7-710ba67de622-config-volume\") pod \"collect-profiles-29496030-jndtz\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.131360 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.133052 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-trusted-ca\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.133136 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8kdk\" (UniqueName: \"kubernetes.io/projected/6e70d9dc-9e4c-45a3-b8d3-046067b91297-kube-api-access-m8kdk\") pod \"catalog-operator-68c6474976-th8pq\" (UID: \"6e70d9dc-9e4c-45a3-b8d3-046067b91297\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.133185 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7mbqg\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.133210 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-metrics-tls\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.133293 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.133345 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/755faa64-0182-4450-bd27-cb87446008d8-serving-cert\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.133374 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d03a1e8b-8151-4fb9-8a25-56e567566244-metrics-tls\") pod \"dns-operator-744455d44c-7trvn\" (UID: \"d03a1e8b-8151-4fb9-8a25-56e567566244\") " pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.133957 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:25.633936864 +0000 UTC m=+150.606248415 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.134583 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/359e47a9-5633-496e-9522-d7c522c674bf-encryption-config\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.135643 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/467f100f-83e4-43b0-bcf0-16cfe7cb0393-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.135723 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqv4r\" (UniqueName: \"kubernetes.io/projected/3934d9aa-0054-4ed3-a2e8-a57dc60dad77-kube-api-access-wqv4r\") pod \"control-plane-machine-set-operator-78cbb6b69f-hd9qf\" (UID: \"3934d9aa-0054-4ed3-a2e8-a57dc60dad77\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.138881 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/755faa64-0182-4450-bd27-cb87446008d8-serving-cert\") pod 
\"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.139636 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-apiservice-cert\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.139396 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d03a1e8b-8151-4fb9-8a25-56e567566244-metrics-tls\") pod \"dns-operator-744455d44c-7trvn\" (UID: \"d03a1e8b-8151-4fb9-8a25-56e567566244\") " pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.139768 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7mbqg\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.139822 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-webhook-cert\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.139852 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67466d94-68c1-4700-aec7-f2dd533b2fd6-auth-proxy-config\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.139883 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/102f0f42-f8c6-4e98-9e96-1659a0a62c50-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.139906 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7866g\" (UniqueName: \"kubernetes.io/projected/67466d94-68c1-4700-aec7-f2dd533b2fd6-kube-api-access-7866g\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140026 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-plugins-dir\") pod 
\"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140072 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ad239a7-b360-4ef2-ae38-cd013bd6c2e6-proxy-tls\") pod \"machine-config-controller-84d6567774-wjbc5\" (UID: \"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140109 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/359e47a9-5633-496e-9522-d7c522c674bf-audit-dir\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140168 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-etcd-ca\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140186 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/359e47a9-5633-496e-9522-d7c522c674bf-audit-dir\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140199 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/88956ba5-c91f-435b-94fa-d639c87311f3-certs\") pod \"machine-config-server-kdljh\" (UID: \"88956ba5-c91f-435b-94fa-d639c87311f3\") " pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140220 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whl5r\" (UniqueName: \"kubernetes.io/projected/125e1dd5-1556-4334-86d4-3c45fa9e833d-kube-api-access-whl5r\") pod \"kube-storage-version-migrator-operator-b67b599dd-6m25t\" (UID: \"125e1dd5-1556-4334-86d4-3c45fa9e833d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140298 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/467f100f-83e4-43b0-bcf0-16cfe7cb0393-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140329 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn5vx\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-kube-api-access-pn5vx\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc 
kubenswrapper[4758]: I0130 08:32:25.140366 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755faa64-0182-4450-bd27-cb87446008d8-config\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140404 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-tls\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140856 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/359e47a9-5633-496e-9522-d7c522c674bf-serving-cert\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140888 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-mountpoint-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140913 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25e6f6dd-4791-48ca-a614-928eb2fd6886-config\") pod \"service-ca-operator-777779d784-w6hpr\" (UID: \"25e6f6dd-4791-48ca-a614-928eb2fd6886\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140941 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q92t4\" (UniqueName: \"kubernetes.io/projected/755faa64-0182-4450-bd27-cb87446008d8-kube-api-access-q92t4\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140971 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-metrics-certs\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.140995 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/125e1dd5-1556-4334-86d4-3c45fa9e833d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6m25t\" (UID: \"125e1dd5-1556-4334-86d4-3c45fa9e833d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141020 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/6e70d9dc-9e4c-45a3-b8d3-046067b91297-srv-cert\") pod \"catalog-operator-68c6474976-th8pq\" (UID: \"6e70d9dc-9e4c-45a3-b8d3-046067b91297\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141038 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/467f100f-83e4-43b0-bcf0-16cfe7cb0393-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141111 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a06bcc27-063e-4acc-942d-78594f88fd2c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z9hfd\" (UID: \"a06bcc27-063e-4acc-942d-78594f88fd2c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141142 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125e1dd5-1556-4334-86d4-3c45fa9e833d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6m25t\" (UID: \"125e1dd5-1556-4334-86d4-3c45fa9e833d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141164 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdlpm\" (UniqueName: \"kubernetes.io/projected/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-kube-api-access-jdlpm\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141187 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhtqg\" (UniqueName: \"kubernetes.io/projected/99426938-7a55-4e2a-8ded-c683fe91d54d-kube-api-access-bhtqg\") pod \"ingress-canary-sgrqx\" (UID: \"99426938-7a55-4e2a-8ded-c683fe91d54d\") " pod="openshift-ingress-canary/ingress-canary-sgrqx" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141216 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-proxy-tls\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141281 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-socket-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141304 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-registration-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141333 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/755faa64-0182-4450-bd27-cb87446008d8-trusted-ca\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141355 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fb1701d-491e-479d-a12b-5af9e40e2be5-config-volume\") pod \"dns-default-ggszh\" (UID: \"2fb1701d-491e-479d-a12b-5af9e40e2be5\") " pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141379 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/359e47a9-5633-496e-9522-d7c522c674bf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141404 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj54p\" (UniqueName: \"kubernetes.io/projected/2950f4ab-791b-4190-9455-14e34e95f22d-kube-api-access-mj54p\") pod \"service-ca-9c57cc56f-hlm86\" (UID: \"2950f4ab-791b-4190-9455-14e34e95f22d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141434 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wljvs\" (UniqueName: \"kubernetes.io/projected/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-kube-api-access-wljvs\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: \"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141462 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-certificates\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141485 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmxtx\" (UniqueName: \"kubernetes.io/projected/d03a1e8b-8151-4fb9-8a25-56e567566244-kube-api-access-lmxtx\") pod \"dns-operator-744455d44c-7trvn\" (UID: \"d03a1e8b-8151-4fb9-8a25-56e567566244\") " pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141493 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/755faa64-0182-4450-bd27-cb87446008d8-config\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc 
kubenswrapper[4758]: I0130 08:32:25.141506 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-etcd-client\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141561 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gnr9\" (UniqueName: \"kubernetes.io/projected/2fb1701d-491e-479d-a12b-5af9e40e2be5-kube-api-access-9gnr9\") pod \"dns-default-ggszh\" (UID: \"2fb1701d-491e-479d-a12b-5af9e40e2be5\") " pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141581 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-etcd-service-ca\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141620 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4ggx\" (UniqueName: \"kubernetes.io/projected/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-kube-api-access-h4ggx\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141645 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/359e47a9-5633-496e-9522-d7c522c674bf-etcd-client\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141692 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n6rk\" (UniqueName: \"kubernetes.io/projected/a06bcc27-063e-4acc-942d-78594f88fd2c-kube-api-access-5n6rk\") pod \"multus-admission-controller-857f4d67dd-z9hfd\" (UID: \"a06bcc27-063e-4acc-942d-78594f88fd2c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141718 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmn8l\" (UniqueName: \"kubernetes.io/projected/88956ba5-c91f-435b-94fa-d639c87311f3-kube-api-access-nmn8l\") pod \"machine-config-server-kdljh\" (UID: \"88956ba5-c91f-435b-94fa-d639c87311f3\") " pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141741 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/830a2e4b-3e0d-409e-9a0b-bf503e81d5e0-config\") pod \"kube-apiserver-operator-766d6c64bb-h6d9r\" (UID: \"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141766 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" 
(UniqueName: \"kubernetes.io/secret/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-default-certificate\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141788 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvghl\" (UniqueName: \"kubernetes.io/projected/edeaef2c-0b5f-4448-a890-764774c8ff03-kube-api-access-qvghl\") pod \"package-server-manager-789f6589d5-hgtxv\" (UID: \"edeaef2c-0b5f-4448-a890-764774c8ff03\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141839 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vcvc6\" (UID: \"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141864 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2950f4ab-791b-4190-9455-14e34e95f22d-signing-key\") pod \"service-ca-9c57cc56f-hlm86\" (UID: \"2950f4ab-791b-4190-9455-14e34e95f22d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141886 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0d98b4c-0d9c-4a9a-af05-1def738f8293-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mpftb\" (UID: \"a0d98b4c-0d9c-4a9a-af05-1def738f8293\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141908 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/edeaef2c-0b5f-4448-a890-764774c8ff03-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hgtxv\" (UID: \"edeaef2c-0b5f-4448-a890-764774c8ff03\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141930 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbdrv\" (UniqueName: \"kubernetes.io/projected/24c14c8a-2e57-452b-b70b-646c1e2bac06-kube-api-access-cbdrv\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141953 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-stats-auth\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.141994 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97djv\" (UniqueName: 
\"kubernetes.io/projected/49a66643-dc8c-4c84-9345-7f98676dc1d3-kube-api-access-97djv\") pod \"marketplace-operator-79b997595-7mbqg\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142021 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25e6f6dd-4791-48ca-a614-928eb2fd6886-serving-cert\") pod \"service-ca-operator-777779d784-w6hpr\" (UID: \"25e6f6dd-4791-48ca-a614-928eb2fd6886\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142054 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf779\" (UniqueName: \"kubernetes.io/projected/b0313d23-ff69-4957-ad3b-d6adc246aad5-kube-api-access-kf779\") pod \"downloads-7954f5f757-zp6d8\" (UID: \"b0313d23-ff69-4957-ad3b-d6adc246aad5\") " pod="openshift-console/downloads-7954f5f757-zp6d8" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142098 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5pbl\" (UniqueName: \"kubernetes.io/projected/102f0f42-f8c6-4e98-9e96-1659a0a62c50-kube-api-access-c5pbl\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142152 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3934d9aa-0054-4ed3-a2e8-a57dc60dad77-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hd9qf\" (UID: \"3934d9aa-0054-4ed3-a2e8-a57dc60dad77\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142258 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-images\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142332 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/102f0f42-f8c6-4e98-9e96-1659a0a62c50-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142361 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-service-ca-bundle\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142410 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/99426938-7a55-4e2a-8ded-c683fe91d54d-cert\") pod \"ingress-canary-sgrqx\" (UID: \"99426938-7a55-4e2a-8ded-c683fe91d54d\") " pod="openshift-ingress-canary/ingress-canary-sgrqx" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142433 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2fb1701d-491e-479d-a12b-5af9e40e2be5-metrics-tls\") pod \"dns-default-ggszh\" (UID: \"2fb1701d-491e-479d-a12b-5af9e40e2be5\") " pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142456 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-trusted-ca\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142507 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0d98b4c-0d9c-4a9a-af05-1def738f8293-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mpftb\" (UID: \"a0d98b4c-0d9c-4a9a-af05-1def738f8293\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142542 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/67466d94-68c1-4700-aec7-f2dd533b2fd6-machine-approver-tls\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.142567 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8frm7\" (UniqueName: \"kubernetes.io/projected/9313ed67-0218-4d32-adf7-710ba67de622-kube-api-access-8frm7\") pod \"collect-profiles-29496030-jndtz\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.143720 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/755faa64-0182-4450-bd27-cb87446008d8-trusted-ca\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.144294 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/359e47a9-5633-496e-9522-d7c522c674bf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.147581 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/359e47a9-5633-496e-9522-d7c522c674bf-serving-cert\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.148646 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-metrics-certs\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.150517 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-certificates\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.150894 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-stats-auth\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.153548 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-service-ca-bundle\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.207881 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-default-certificate\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.208245 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/359e47a9-5633-496e-9522-d7c522c674bf-etcd-client\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.212135 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.212613 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.212827 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.220362 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.224791 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9kt48"] Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.232885 4758 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.246845 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.247587 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:25.747550429 +0000 UTC m=+150.719861980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250517 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a06bcc27-063e-4acc-942d-78594f88fd2c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z9hfd\" (UID: \"a06bcc27-063e-4acc-942d-78594f88fd2c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250600 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhtqg\" (UniqueName: \"kubernetes.io/projected/99426938-7a55-4e2a-8ded-c683fe91d54d-kube-api-access-bhtqg\") pod \"ingress-canary-sgrqx\" (UID: \"99426938-7a55-4e2a-8ded-c683fe91d54d\") " pod="openshift-ingress-canary/ingress-canary-sgrqx" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250620 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-proxy-tls\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250635 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125e1dd5-1556-4334-86d4-3c45fa9e833d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6m25t\" (UID: \"125e1dd5-1556-4334-86d4-3c45fa9e833d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250660 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdlpm\" (UniqueName: \"kubernetes.io/projected/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-kube-api-access-jdlpm\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250682 4758 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-socket-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250696 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-registration-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250727 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fb1701d-491e-479d-a12b-5af9e40e2be5-config-volume\") pod \"dns-default-ggszh\" (UID: \"2fb1701d-491e-479d-a12b-5af9e40e2be5\") " pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250750 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj54p\" (UniqueName: \"kubernetes.io/projected/2950f4ab-791b-4190-9455-14e34e95f22d-kube-api-access-mj54p\") pod \"service-ca-9c57cc56f-hlm86\" (UID: \"2950f4ab-791b-4190-9455-14e34e95f22d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250770 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wljvs\" (UniqueName: \"kubernetes.io/projected/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-kube-api-access-wljvs\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: \"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250802 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-etcd-client\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250840 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gnr9\" (UniqueName: \"kubernetes.io/projected/2fb1701d-491e-479d-a12b-5af9e40e2be5-kube-api-access-9gnr9\") pod \"dns-default-ggszh\" (UID: \"2fb1701d-491e-479d-a12b-5af9e40e2be5\") " pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250856 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-etcd-service-ca\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250896 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n6rk\" (UniqueName: \"kubernetes.io/projected/a06bcc27-063e-4acc-942d-78594f88fd2c-kube-api-access-5n6rk\") pod \"multus-admission-controller-857f4d67dd-z9hfd\" (UID: \"a06bcc27-063e-4acc-942d-78594f88fd2c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 
08:32:25.250913 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmn8l\" (UniqueName: \"kubernetes.io/projected/88956ba5-c91f-435b-94fa-d639c87311f3-kube-api-access-nmn8l\") pod \"machine-config-server-kdljh\" (UID: \"88956ba5-c91f-435b-94fa-d639c87311f3\") " pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250930 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/830a2e4b-3e0d-409e-9a0b-bf503e81d5e0-config\") pod \"kube-apiserver-operator-766d6c64bb-h6d9r\" (UID: \"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250949 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvghl\" (UniqueName: \"kubernetes.io/projected/edeaef2c-0b5f-4448-a890-764774c8ff03-kube-api-access-qvghl\") pod \"package-server-manager-789f6589d5-hgtxv\" (UID: \"edeaef2c-0b5f-4448-a890-764774c8ff03\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250976 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vcvc6\" (UID: \"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.250993 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/edeaef2c-0b5f-4448-a890-764774c8ff03-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hgtxv\" (UID: \"edeaef2c-0b5f-4448-a890-764774c8ff03\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251012 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbdrv\" (UniqueName: \"kubernetes.io/projected/24c14c8a-2e57-452b-b70b-646c1e2bac06-kube-api-access-cbdrv\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251028 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2950f4ab-791b-4190-9455-14e34e95f22d-signing-key\") pod \"service-ca-9c57cc56f-hlm86\" (UID: \"2950f4ab-791b-4190-9455-14e34e95f22d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251051 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0d98b4c-0d9c-4a9a-af05-1def738f8293-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mpftb\" (UID: \"a0d98b4c-0d9c-4a9a-af05-1def738f8293\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251138 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-97djv\" (UniqueName: \"kubernetes.io/projected/49a66643-dc8c-4c84-9345-7f98676dc1d3-kube-api-access-97djv\") pod \"marketplace-operator-79b997595-7mbqg\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251160 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25e6f6dd-4791-48ca-a614-928eb2fd6886-serving-cert\") pod \"service-ca-operator-777779d784-w6hpr\" (UID: \"25e6f6dd-4791-48ca-a614-928eb2fd6886\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251183 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5pbl\" (UniqueName: \"kubernetes.io/projected/102f0f42-f8c6-4e98-9e96-1659a0a62c50-kube-api-access-c5pbl\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251200 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3934d9aa-0054-4ed3-a2e8-a57dc60dad77-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hd9qf\" (UID: \"3934d9aa-0054-4ed3-a2e8-a57dc60dad77\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251218 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-images\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251242 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/102f0f42-f8c6-4e98-9e96-1659a0a62c50-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251260 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99426938-7a55-4e2a-8ded-c683fe91d54d-cert\") pod \"ingress-canary-sgrqx\" (UID: \"99426938-7a55-4e2a-8ded-c683fe91d54d\") " pod="openshift-ingress-canary/ingress-canary-sgrqx" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251275 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2fb1701d-491e-479d-a12b-5af9e40e2be5-metrics-tls\") pod \"dns-default-ggszh\" (UID: \"2fb1701d-491e-479d-a12b-5af9e40e2be5\") " pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251290 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-trusted-ca\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: 
\"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251305 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0d98b4c-0d9c-4a9a-af05-1def738f8293-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mpftb\" (UID: \"a0d98b4c-0d9c-4a9a-af05-1def738f8293\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251343 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/67466d94-68c1-4700-aec7-f2dd533b2fd6-machine-approver-tls\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251360 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8frm7\" (UniqueName: \"kubernetes.io/projected/9313ed67-0218-4d32-adf7-710ba67de622-kube-api-access-8frm7\") pod \"collect-profiles-29496030-jndtz\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251379 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79889e0a-985e-4bcc-bdfe-2160f05a5bfe-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xts57\" (UID: \"79889e0a-985e-4bcc-bdfe-2160f05a5bfe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251394 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-csi-data-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251409 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2950f4ab-791b-4190-9455-14e34e95f22d-signing-cabundle\") pod \"service-ca-9c57cc56f-hlm86\" (UID: \"2950f4ab-791b-4190-9455-14e34e95f22d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251427 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc7f2\" (UniqueName: \"kubernetes.io/projected/25e6f6dd-4791-48ca-a614-928eb2fd6886-kube-api-access-rc7f2\") pod \"service-ca-operator-777779d784-w6hpr\" (UID: \"25e6f6dd-4791-48ca-a614-928eb2fd6886\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251447 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9313ed67-0218-4d32-adf7-710ba67de622-secret-volume\") pod \"collect-profiles-29496030-jndtz\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 
08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251462 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6e70d9dc-9e4c-45a3-b8d3-046067b91297-profile-collector-cert\") pod \"catalog-operator-68c6474976-th8pq\" (UID: \"6e70d9dc-9e4c-45a3-b8d3-046067b91297\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251481 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6dv4\" (UniqueName: \"kubernetes.io/projected/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-kube-api-access-z6dv4\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251497 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-config\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251516 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/830a2e4b-3e0d-409e-9a0b-bf503e81d5e0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h6d9r\" (UID: \"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251533 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bstw8\" (UniqueName: \"kubernetes.io/projected/87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426-kube-api-access-bstw8\") pod \"openshift-apiserver-operator-796bbdcf4f-vcvc6\" (UID: \"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251550 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ad239a7-b360-4ef2-ae38-cd013bd6c2e6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wjbc5\" (UID: \"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251569 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/102f0f42-f8c6-4e98-9e96-1659a0a62c50-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251584 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79889e0a-985e-4bcc-bdfe-2160f05a5bfe-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xts57\" (UID: \"79889e0a-985e-4bcc-bdfe-2160f05a5bfe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 
08:32:25.251606 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-tmpfs\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251621 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-serving-cert\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251645 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msbbw\" (UniqueName: \"kubernetes.io/projected/5ad239a7-b360-4ef2-ae38-cd013bd6c2e6-kube-api-access-msbbw\") pod \"machine-config-controller-84d6567774-wjbc5\" (UID: \"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251671 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc7b7\" (UniqueName: \"kubernetes.io/projected/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-kube-api-access-rc7b7\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251693 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0d98b4c-0d9c-4a9a-af05-1def738f8293-config\") pod \"kube-controller-manager-operator-78b949d7b-mpftb\" (UID: \"a0d98b4c-0d9c-4a9a-af05-1def738f8293\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251715 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-srv-cert\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: \"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251737 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/88956ba5-c91f-435b-94fa-d639c87311f3-node-bootstrap-token\") pod \"machine-config-server-kdljh\" (UID: \"88956ba5-c91f-435b-94fa-d639c87311f3\") " pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251758 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251778 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251812 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vcvc6\" (UID: \"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251839 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj26n\" (UniqueName: \"kubernetes.io/projected/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-kube-api-access-jj26n\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251862 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: \"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251880 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/830a2e4b-3e0d-409e-9a0b-bf503e81d5e0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h6d9r\" (UID: \"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251904 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67466d94-68c1-4700-aec7-f2dd533b2fd6-config\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251927 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79889e0a-985e-4bcc-bdfe-2160f05a5bfe-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xts57\" (UID: \"79889e0a-985e-4bcc-bdfe-2160f05a5bfe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251949 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9313ed67-0218-4d32-adf7-710ba67de622-config-volume\") pod \"collect-profiles-29496030-jndtz\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251972 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8kdk\" (UniqueName: \"kubernetes.io/projected/6e70d9dc-9e4c-45a3-b8d3-046067b91297-kube-api-access-m8kdk\") pod \"catalog-operator-68c6474976-th8pq\" 
(UID: \"6e70d9dc-9e4c-45a3-b8d3-046067b91297\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.251999 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252023 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7mbqg\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252050 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-metrics-tls\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252095 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqv4r\" (UniqueName: \"kubernetes.io/projected/3934d9aa-0054-4ed3-a2e8-a57dc60dad77-kube-api-access-wqv4r\") pod \"control-plane-machine-set-operator-78cbb6b69f-hd9qf\" (UID: \"3934d9aa-0054-4ed3-a2e8-a57dc60dad77\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252118 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-apiservice-cert\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252144 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7mbqg\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252167 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-webhook-cert\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252188 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67466d94-68c1-4700-aec7-f2dd533b2fd6-auth-proxy-config\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252213 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/102f0f42-f8c6-4e98-9e96-1659a0a62c50-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252241 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7866g\" (UniqueName: \"kubernetes.io/projected/67466d94-68c1-4700-aec7-f2dd533b2fd6-kube-api-access-7866g\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252266 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-plugins-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252288 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ad239a7-b360-4ef2-ae38-cd013bd6c2e6-proxy-tls\") pod \"machine-config-controller-84d6567774-wjbc5\" (UID: \"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252312 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-etcd-ca\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252341 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/88956ba5-c91f-435b-94fa-d639c87311f3-certs\") pod \"machine-config-server-kdljh\" (UID: \"88956ba5-c91f-435b-94fa-d639c87311f3\") " pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252365 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whl5r\" (UniqueName: \"kubernetes.io/projected/125e1dd5-1556-4334-86d4-3c45fa9e833d-kube-api-access-whl5r\") pod \"kube-storage-version-migrator-operator-b67b599dd-6m25t\" (UID: \"125e1dd5-1556-4334-86d4-3c45fa9e833d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252406 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-mountpoint-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252428 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25e6f6dd-4791-48ca-a614-928eb2fd6886-config\") pod \"service-ca-operator-777779d784-w6hpr\" (UID: \"25e6f6dd-4791-48ca-a614-928eb2fd6886\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252462 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/125e1dd5-1556-4334-86d4-3c45fa9e833d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6m25t\" (UID: \"125e1dd5-1556-4334-86d4-3c45fa9e833d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252486 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6e70d9dc-9e4c-45a3-b8d3-046067b91297-srv-cert\") pod \"catalog-operator-68c6474976-th8pq\" (UID: \"6e70d9dc-9e4c-45a3-b8d3-046067b91297\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252827 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-socket-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.252887 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-registration-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.253683 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.256289 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125e1dd5-1556-4334-86d4-3c45fa9e833d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6m25t\" (UID: \"125e1dd5-1556-4334-86d4-3c45fa9e833d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.256722 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q592c"] Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.256758 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7"] Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.256785 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a06bcc27-063e-4acc-942d-78594f88fd2c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z9hfd\" (UID: \"a06bcc27-063e-4acc-942d-78594f88fd2c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.257092 4758 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-csi-data-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.257537 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-config\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.258202 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vcvc6\" (UID: \"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.258530 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ad239a7-b360-4ef2-ae38-cd013bd6c2e6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wjbc5\" (UID: \"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.258645 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-proxy-tls\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.258707 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-plugins-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.258920 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/102f0f42-f8c6-4e98-9e96-1659a0a62c50-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.258980 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7mbqg\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.260217 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-tmpfs\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.260366 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-images\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.260420 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/24c14c8a-2e57-452b-b70b-646c1e2bac06-mountpoint-dir\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.260921 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-etcd-ca\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh"
Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.262579 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:25.762538811 +0000 UTC m=+150.734850522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.264398 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-etcd-service-ca\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.264963 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vcvc6\" (UID: \"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.267524 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3934d9aa-0054-4ed3-a2e8-a57dc60dad77-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hd9qf\" (UID: \"3934d9aa-0054-4ed3-a2e8-a57dc60dad77\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.267650 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-metrics-tls\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.268802 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.269164 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-etcd-client\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.269190 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0d98b4c-0d9c-4a9a-af05-1def738f8293-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mpftb\" (UID: \"a0d98b4c-0d9c-4a9a-af05-1def738f8293\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.271879 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-trusted-ca\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.272810 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ad239a7-b360-4ef2-ae38-cd013bd6c2e6-proxy-tls\") pod \"machine-config-controller-84d6567774-wjbc5\" (UID: \"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.274267 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-serving-cert\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.275613 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/102f0f42-f8c6-4e98-9e96-1659a0a62c50-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.281893 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7mbqg\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.282945 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/125e1dd5-1556-4334-86d4-3c45fa9e833d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6m25t\" (UID: \"125e1dd5-1556-4334-86d4-3c45fa9e833d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.286777 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0d98b4c-0d9c-4a9a-af05-1def738f8293-config\") pod \"kube-controller-manager-operator-78b949d7b-mpftb\" (UID: \"a0d98b4c-0d9c-4a9a-af05-1def738f8293\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.287301 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvl5k\" (UniqueName: \"kubernetes.io/projected/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-kube-api-access-qvl5k\") pod \"oauth-openshift-558db77b4-ssdl5\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.313209 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.321647 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tl5t\" (UniqueName: \"kubernetes.io/projected/df190322-1e43-4ae4-ac74-78702c913801-kube-api-access-8tl5t\") pod \"console-f9d7485db-rxgh6\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " pod="openshift-console/console-f9d7485db-rxgh6"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.329702 4758 request.go:700] Waited for 1.000433144s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.352105 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.353530 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.354015 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:25.853989589 +0000 UTC m=+150.826301140 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.355757 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvfb2\" (UniqueName: \"kubernetes.io/projected/20e5c3b1-91ce-4cbe-866b-745cb58e8c5d-kube-api-access-wvfb2\") pod \"cluster-samples-operator-665b6dd947-knqhk\" (UID: \"20e5c3b1-91ce-4cbe-866b-745cb58e8c5d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.375106 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.395793 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.397817 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6e70d9dc-9e4c-45a3-b8d3-046067b91297-srv-cert\") pod \"catalog-operator-68c6474976-th8pq\" (UID: \"6e70d9dc-9e4c-45a3-b8d3-046067b91297\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.415991 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: \"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.416436 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.421819 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6e70d9dc-9e4c-45a3-b8d3-046067b91297-profile-collector-cert\") pod \"catalog-operator-68c6474976-th8pq\" (UID: \"6e70d9dc-9e4c-45a3-b8d3-046067b91297\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.422244 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9313ed67-0218-4d32-adf7-710ba67de622-secret-volume\") pod \"collect-profiles-29496030-jndtz\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.432464 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.441598 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-srv-cert\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: \"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww"
\"kubernetes.io/secret/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-srv-cert\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: \"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.453585 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.455757 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.456555 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:25.956542437 +0000 UTC m=+150.928853988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.464623 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.468729 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/edeaef2c-0b5f-4448-a890-764774c8ff03-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hgtxv\" (UID: \"edeaef2c-0b5f-4448-a890-764774c8ff03\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.493471 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" event={"ID":"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972","Type":"ContainerStarted","Data":"e491d6716764c77f2cf3d0b73751d42e1951d7ac67b09a1074a34fd126b99ca7"} Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.493562 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" event={"ID":"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972","Type":"ContainerStarted","Data":"9eb218aee65f6ce0d0843d2141ba8677df46d2e8f76d79c266edaaae80aa218f"} Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.497899 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxgp4\" (UniqueName: \"kubernetes.io/projected/f452c53b-893b-4060-b573-595e98576792-kube-api-access-hxgp4\") pod \"openshift-config-operator-7777fb866f-hvsqq\" (UID: \"f452c53b-893b-4060-b573-595e98576792\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.510471 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" event={"ID":"31897db9-9dd1-42a9-8eae-b5e13e113a3c","Type":"ContainerStarted","Data":"f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83"} Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.510711 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" event={"ID":"31897db9-9dd1-42a9-8eae-b5e13e113a3c","Type":"ContainerStarted","Data":"c8e9069e7ec2c171424e1bd93a4b8f9855a2158a2e3b30930d28fb757cb65678"} Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.511154 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.511987 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2db7\" (UniqueName: \"kubernetes.io/projected/e376d872-d6db-4f3b-b9f0-9fff22f7546d-kube-api-access-q2db7\") pod \"openshift-controller-manager-operator-756b6f6bc6-xvdbn\" (UID: \"e376d872-d6db-4f3b-b9f0-9fff22f7546d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.512129 4758 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-n7zk7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.512582 4758 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" podUID="31897db9-9dd1-42a9-8eae-b5e13e113a3c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.512482 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.515316 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" event={"ID":"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a","Type":"ContainerStarted","Data":"483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d"} Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.515377 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" event={"ID":"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a","Type":"ContainerStarted","Data":"1a4096fd26f7cd35b668bd3b74bb0a13df67390a721f7d3ce8bb74a60052f47a"} Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.516798 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.518147 4758 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9kt48 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.518188 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" podUID="deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.518827 4758 generic.go:334] "Generic (PLEG): container finished" podID="814885ca-d12b-49a3-a788-4648517a1c23" containerID="06bed04177d001b476d605fe09f5c6d6e476ffb8f3710f4efae153f1def3cebd" exitCode=0 Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.518859 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" event={"ID":"814885ca-d12b-49a3-a788-4648517a1c23","Type":"ContainerDied","Data":"06bed04177d001b476d605fe09f5c6d6e476ffb8f3710f4efae153f1def3cebd"} Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.518882 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" event={"ID":"814885ca-d12b-49a3-a788-4648517a1c23","Type":"ContainerStarted","Data":"cf05fefe3e528ed06d6e0104e100fd57f9d3ccdb450ae41ca199ebf454ded551"} Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.521981 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-apiservice-cert\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.522018 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-webhook-cert\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.531191 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.541781 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kljqw"] Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.543344 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.552453 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 08:32:25 crc kubenswrapper[4758]: W0130 08:32:25.554863 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b5551b0_f51c_43c9_bf60_e191132339fe.slice/crio-85c0262ef40ab2aeca336771dd3d20ba45b615af8c5c05130272ce1cff973c24 WatchSource:0}: Error finding container 85c0262ef40ab2aeca336771dd3d20ba45b615af8c5c05130272ce1cff973c24: Status 404 returned error can't find the container with id 85c0262ef40ab2aeca336771dd3d20ba45b615af8c5c05130272ce1cff973c24 Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.559473 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.563880 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.564683 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.064665739 +0000 UTC m=+151.036977290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.566212 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.571333 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.583722 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25e6f6dd-4791-48ca-a614-928eb2fd6886-serving-cert\") pod \"service-ca-operator-777779d784-w6hpr\" (UID: \"25e6f6dd-4791-48ca-a614-928eb2fd6886\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.587967 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.591828 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.603940 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25e6f6dd-4791-48ca-a614-928eb2fd6886-config\") pod \"service-ca-operator-777779d784-w6hpr\" (UID: \"25e6f6dd-4791-48ca-a614-928eb2fd6886\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.615386 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.641229 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.656069 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.656709 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9313ed67-0218-4d32-adf7-710ba67de622-config-volume\") pod \"collect-profiles-29496030-jndtz\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.666409 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.667741 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.167722162 +0000 UTC m=+151.140033713 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.674305 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.680349 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2950f4ab-791b-4190-9455-14e34e95f22d-signing-cabundle\") pod \"service-ca-9c57cc56f-hlm86\" (UID: \"2950f4ab-791b-4190-9455-14e34e95f22d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.698680 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.712356 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.738327 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.743342 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2950f4ab-791b-4190-9455-14e34e95f22d-signing-key\") pod \"service-ca-9c57cc56f-hlm86\" (UID: \"2950f4ab-791b-4190-9455-14e34e95f22d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.762439 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.768576 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.769523 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.269505156 +0000 UTC m=+151.241816697 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.772473 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.798726 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/830a2e4b-3e0d-409e-9a0b-bf503e81d5e0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h6d9r\" (UID: \"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.801861 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.805617 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/830a2e4b-3e0d-409e-9a0b-bf503e81d5e0-config\") pod \"kube-apiserver-operator-766d6c64bb-h6d9r\" (UID: \"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.816220 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.834488 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.851875 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.870875 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.871395 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.371372452 +0000 UTC m=+151.343684093 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.873895 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.891665 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.912664 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.920243 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/67466d94-68c1-4700-aec7-f2dd533b2fd6-machine-approver-tls\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.926000 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67466d94-68c1-4700-aec7-f2dd533b2fd6-auth-proxy-config\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.932005 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.936910 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67466d94-68c1-4700-aec7-f2dd533b2fd6-config\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.942908 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq"] Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.959653 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.960306 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn"] Jan 30 08:32:25 crc kubenswrapper[4758]: I0130 08:32:25.972834 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:25 crc kubenswrapper[4758]: E0130 08:32:25.973704 4758 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.473666491 +0000 UTC m=+151.445978042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: W0130 08:32:25.998443 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode376d872_d6db_4f3b_b9f0_9fff22f7546d.slice/crio-66c46e2970936047cd492a86855b7dfda0af974a1f7369abcfb92abb63186c7a WatchSource:0}: Error finding container 66c46e2970936047cd492a86855b7dfda0af974a1f7369abcfb92abb63186c7a: Status 404 returned error can't find the container with id 66c46e2970936047cd492a86855b7dfda0af974a1f7369abcfb92abb63186c7a Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:25.998617 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.000910 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.002368 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ssdl5"] Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.011859 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.013665 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fb1701d-491e-479d-a12b-5af9e40e2be5-config-volume\") pod \"dns-default-ggszh\" (UID: \"2fb1701d-491e-479d-a12b-5af9e40e2be5\") " pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.013842 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2fb1701d-491e-479d-a12b-5af9e40e2be5-metrics-tls\") pod \"dns-default-ggszh\" (UID: \"2fb1701d-491e-479d-a12b-5af9e40e2be5\") " pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.021377 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk"] Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.031879 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.040464 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79889e0a-985e-4bcc-bdfe-2160f05a5bfe-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xts57\" (UID: \"79889e0a-985e-4bcc-bdfe-2160f05a5bfe\") " 
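
[triage note] The recurring failures above share one root cause: both Unmounter.TearDownAt and attacher.MountDevice first resolve the volume's driver name against the kubelet's table of registered CSI plugins, and kubevirt.io.hostpath-provisioner has not registered yet (its csi-hostpathplugin pod is only being started further down in this window). A minimal Go sketch of that lookup pattern, assuming a simplified registry type (hypothetical; not the kubelet's actual code):

package main

import (
	"fmt"
	"sync"
)

// driverRegistry is a simplified stand-in for the kubelet's bookkeeping of
// registered CSI plugins (hypothetical type for illustration only).
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]struct{}
}

func (r *driverRegistry) register(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = struct{}{}
}

// client fails exactly the way the log does while the driver is unregistered.
func (r *driverRegistry) client(name string) error {
	r.mu.RLock()
	defer r.mu.RUnlock()
	if _, ok := r.drivers[name]; !ok {
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return nil
}

func main() {
	reg := &driverRegistry{drivers: map[string]struct{}{}}

	// Before the hostpath plugin checks in: every mount/unmount fails here.
	if err := reg.client("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("mount fails:", err) // matches the TearDownAt/MountDevice errors above
	}

	// Once registration lands, the same lookup (and the retried mount) succeeds.
	reg.register("kubevirt.io.hostpath-provisioner")
	fmt.Println("after registration:", reg.client("kubevirt.io.hostpath-provisioner"))
}
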
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.052582 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.072161 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.074634 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.075313 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.575298579 +0000 UTC m=+151.547610130 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.079841 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79889e0a-985e-4bcc-bdfe-2160f05a5bfe-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xts57\" (UID: \"79889e0a-985e-4bcc-bdfe-2160f05a5bfe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.092928 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.115500 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.131939 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.135709 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rxgh6"] Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.152092 4758 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 08:32:26 crc kubenswrapper[4758]: W0130 08:32:26.166442 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf190322_1e43_4ae4_ac74_78702c913801.slice/crio-773264022dc035335d95553995a9e9c2d29eb6d0d5d52eedc14a38ba389878c4 WatchSource:0}: Error finding container 773264022dc035335d95553995a9e9c2d29eb6d0d5d52eedc14a38ba389878c4: 
Status 404 returned error can't find the container with id 773264022dc035335d95553995a9e9c2d29eb6d0d5d52eedc14a38ba389878c4 Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.171271 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.175827 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.176003 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.675977968 +0000 UTC m=+151.648289509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.176346 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.176825 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.676812853 +0000 UTC m=+151.649124404 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.185331 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/88956ba5-c91f-435b-94fa-d639c87311f3-certs\") pod \"machine-config-server-kdljh\" (UID: \"88956ba5-c91f-435b-94fa-d639c87311f3\") " pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.194219 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.212276 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.224178 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/88956ba5-c91f-435b-94fa-d639c87311f3-node-bootstrap-token\") pod \"machine-config-server-kdljh\" (UID: \"88956ba5-c91f-435b-94fa-d639c87311f3\") " pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.232151 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.251503 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.259713 4758 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.259771 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99426938-7a55-4e2a-8ded-c683fe91d54d-cert podName:99426938-7a55-4e2a-8ded-c683fe91d54d nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.759753534 +0000 UTC m=+151.732065085 (durationBeforeRetry 500ms). 
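
[triage note] Each failed volume operation above is parked by nestedpendingoperations: the "No retries permitted until" timestamp is simply the failure time plus durationBeforeRetry, and attempts inside that window are refused. The kubelet can grow this delay on repeated failures of the same operation (exponential backoff), though this excerpt only ever shows the initial 500ms window. A sketch of that gate, as a simplified model rather than kubelet code:

package main

import (
	"fmt"
	"time"
)

// retryGate models the behaviour visible above: after a failure, the
// operation key (volume + pod) is locked out until failureTime + delay.
type retryGate struct {
	notBefore map[string]time.Time
}

func (g *retryGate) fail(key string, now time.Time, delay time.Duration) time.Time {
	t := now.Add(delay)
	g.notBefore[key] = t
	return t
}

func (g *retryGate) mayRetry(key string, now time.Time) bool {
	return now.After(g.notBefore[key])
}

func main() {
	g := &retryGate{notBefore: map[string]time.Time{}}
	key := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"

	// Failure stamped at 08:32:25.871395, as in the first MountDevice error.
	now := time.Date(2026, 1, 30, 8, 32, 25, 871395000, time.UTC)
	until := g.fail(key, now, 500*time.Millisecond)
	fmt.Println("no retries permitted until", until) // ~08:32:26.371, matching the log

	fmt.Println(g.mayRetry(key, now.Add(100*time.Millisecond))) // false: still inside the window
	fmt.Println(g.mayRetry(key, now.Add(600*time.Millisecond))) // true: window has passed
}
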
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.271970 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.278035 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.278305 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.778278837 +0000 UTC m=+151.750590388 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.278584 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.279024 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.77901663 +0000 UTC m=+151.751328181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
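
[triage note] The canary-serving-cert failure just above is a different, self-healing race: the secret volume mount ran before the kubelet's secret informer cache had synced that object, and the very next reflector entry shows the cache being populated. The "timed out waiting for the condition" wording is the classic poll-with-deadline failure; a self-contained sketch of that pattern (plain Go standing in for the apimachinery wait helpers):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitForCondition polls cond every interval until it returns true or the
// context deadline expires, mirroring the failure mode seen above.
func waitForCondition(ctx context.Context, interval time.Duration, cond func() bool) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if cond() {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for the condition")
		case <-ticker.C:
		}
	}
}

func main() {
	// Simulate a secret cache that only syncs 200ms from now.
	syncedAt := time.Now().Add(200 * time.Millisecond)
	cacheSynced := func() bool { return time.Now().After(syncedAt) }

	// First attempt: deadline shorter than the sync delay -> same error as the log.
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()
	fmt.Println(waitForCondition(ctx, 20*time.Millisecond, cacheSynced))

	// Retry after the 500ms backoff window: the cache has synced and the mount proceeds.
	ctx2, cancel2 := context.WithTimeout(context.Background(), time.Second)
	defer cancel2()
	fmt.Println(waitForCondition(ctx2, 20*time.Millisecond, cacheSynced)) // <nil>
}
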
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.291064 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.350226 4758 request.go:700] Waited for 1.221017686s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa/token
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.350703 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-bound-sa-token\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.368230 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kngv7\" (UniqueName: \"kubernetes.io/projected/becac525-d7ec-48b6-9f52-3b7ca1606e50-kube-api-access-kngv7\") pod \"migrator-59844c95c7-57j8p\" (UID: \"becac525-d7ec-48b6-9f52-3b7ca1606e50\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.380192 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.380366 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.880341829 +0000 UTC m=+151.852653380 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
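
[triage note] "Waited for 1.221017686s due to client-side throttling, not priority and fairness" above is not an error: during mass pod startup the kubelet fires a burst of serviceaccount token requests and its own client-side rate limiter (a token bucket) queues them before they ever reach API priority and fairness. A sketch of the mechanism with golang.org/x/time/rate (the QPS/burst values are illustrative, not read from this node's config):

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Token bucket: refill 5 tokens/s, hold at most 10 - the shape of a
	// typical client-side QPS/burst limiter (values are illustrative).
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	ctx := context.Background()

	// Fire 20 "requests": the first 10 ride the burst, the rest queue,
	// and the per-request wait grows just like the Waited... message.
	for i := 0; i < 20; i++ {
		start := time.Now()
		if err := limiter.Wait(ctx); err != nil {
			fmt.Println("wait aborted:", err)
			return
		}
		if waited := time.Since(start); waited > 10*time.Millisecond {
			fmt.Printf("request %d: waited %s due to client-side throttling\n", i, waited)
		}
	}
}
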
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.380885 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.381221 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.881207356 +0000 UTC m=+151.853518907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.395683 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s94b7\" (UniqueName: \"kubernetes.io/projected/359e47a9-5633-496e-9522-d7c522c674bf-kube-api-access-s94b7\") pod \"apiserver-7bbb656c7d-ct6sj\" (UID: \"359e47a9-5633-496e-9522-d7c522c674bf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.407548 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn5vx\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-kube-api-access-pn5vx\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.414663 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.429095 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q92t4\" (UniqueName: \"kubernetes.io/projected/755faa64-0182-4450-bd27-cb87446008d8-kube-api-access-q92t4\") pod \"console-operator-58897d9998-8zkrv\" (UID: \"755faa64-0182-4450-bd27-cb87446008d8\") " pod="openshift-console-operator/console-operator-58897d9998-8zkrv"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.445899 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4ggx\" (UniqueName: \"kubernetes.io/projected/6df11515-6ad6-40b0-bd21-fc92e2eaeca6-kube-api-access-h4ggx\") pod \"router-default-5444994796-rmc6n\" (UID: \"6df11515-6ad6-40b0-bd21-fc92e2eaeca6\") " pod="openshift-ingress/router-default-5444994796-rmc6n"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.451587 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.474534 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmxtx\" (UniqueName: \"kubernetes.io/projected/d03a1e8b-8151-4fb9-8a25-56e567566244-kube-api-access-lmxtx\") pod \"dns-operator-744455d44c-7trvn\" (UID: \"d03a1e8b-8151-4fb9-8a25-56e567566244\") " pod="openshift-dns-operator/dns-operator-744455d44c-7trvn"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.480077 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8zkrv"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.482337 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.482470 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.982449872 +0000 UTC m=+151.954761423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.482841 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.483155 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:26.983139104 +0000 UTC m=+151.955450655 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.486864 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf779\" (UniqueName: \"kubernetes.io/projected/b0313d23-ff69-4957-ad3b-d6adc246aad5-kube-api-access-kf779\") pod \"downloads-7954f5f757-zp6d8\" (UID: \"b0313d23-ff69-4957-ad3b-d6adc246aad5\") " pod="openshift-console/downloads-7954f5f757-zp6d8"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.496465 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zp6d8"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.505967 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7trvn"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.509246 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdlpm\" (UniqueName: \"kubernetes.io/projected/8e5a4d53-1458-40fc-9171-b7cac79f3b8a-kube-api-access-jdlpm\") pod \"etcd-operator-b45778765-z8tqh\" (UID: \"8e5a4d53-1458-40fc-9171-b7cac79f3b8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.517203 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-rmc6n"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.572596 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.574745 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj54p\" (UniqueName: \"kubernetes.io/projected/2950f4ab-791b-4190-9455-14e34e95f22d-kube-api-access-mj54p\") pod \"service-ca-9c57cc56f-hlm86\" (UID: \"2950f4ab-791b-4190-9455-14e34e95f22d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hlm86"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.593643 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhtqg\" (UniqueName: \"kubernetes.io/projected/99426938-7a55-4e2a-8ded-c683fe91d54d-kube-api-access-bhtqg\") pod \"ingress-canary-sgrqx\" (UID: \"99426938-7a55-4e2a-8ded-c683fe91d54d\") " pod="openshift-ingress-canary/ingress-canary-sgrqx"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.598240 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gnr9\" (UniqueName: \"kubernetes.io/projected/2fb1701d-491e-479d-a12b-5af9e40e2be5-kube-api-access-9gnr9\") pod \"dns-default-ggszh\" (UID: \"2fb1701d-491e-479d-a12b-5af9e40e2be5\") " pod="openshift-dns/dns-default-ggszh"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.609508 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" event={"ID":"8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972","Type":"ContainerStarted","Data":"bb9b8e054de931ab03f7eb90b37e3eb1db1214d52ef2bd3df6ab4f36b3e60abf"}
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.621788 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.622318 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.122300733 +0000 UTC m=+152.094612284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
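
[triage note] Every entry in this file follows one shape: journald prefix, klog severity letter plus timestamp, PID, source file:line, message. When triaging a window like this one it is often easier to parse than to read; a small Go filter that pulls out severity, source location, and message, and tallies entries per source (the regular expression is written for the format above, nothing more):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches e.g.:
// Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.279024 4758 nestedpendingoperations.go:348] Operation for ...
var klogLine = regexp.MustCompile(
	`^([A-Z][a-z]{2} \d+ [\d:]+) \S+ kubenswrapper\[\d+\]: ([IWE])(\d{4} [\d:.]+) +\d+ ([\w.]+:\d+)\] (.*)$`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // entries can be long
	counts := map[string]int{}
	for sc.Scan() {
		m := klogLine.FindStringSubmatch(sc.Text())
		if m == nil {
			continue // continuation fragment or non-klog line
		}
		sev, src := m[2], m[4]
		counts[sev+" "+src]++
		if sev == "E" {
			// Print the first 100 chars of every error-level message.
			fmt.Printf("%s %s %.100s\n", m[3], src, m[5])
		}
	}
	fmt.Println("\nentries per severity+source:")
	for k, v := range counts {
		fmt.Println(k, v)
	}
}

Run against this window it makes the pattern obvious: the error count is dominated by nestedpendingoperations.go:348, all for the same PVC.
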
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.622519 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.623143 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n6rk\" (UniqueName: \"kubernetes.io/projected/a06bcc27-063e-4acc-942d-78594f88fd2c-kube-api-access-5n6rk\") pod \"multus-admission-controller-857f4d67dd-z9hfd\" (UID: \"a06bcc27-063e-4acc-942d-78594f88fd2c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.624006 4758 generic.go:334] "Generic (PLEG): container finished" podID="f452c53b-893b-4060-b573-595e98576792" containerID="4b57da8d1eee57235d82410e50918444bac3c4f78786e34aea5a3019fbed97fe" exitCode=0
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.624096 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" event={"ID":"f452c53b-893b-4060-b573-595e98576792","Type":"ContainerDied","Data":"4b57da8d1eee57235d82410e50918444bac3c4f78786e34aea5a3019fbed97fe"}
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.624120 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" event={"ID":"f452c53b-893b-4060-b573-595e98576792","Type":"ContainerStarted","Data":"0768556d82c063eab914e0e7af28e9e4bb30fbf6448ca9fd896d5c86ea40dc81"}
Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.626619 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.126565668 +0000 UTC m=+152.098877219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.629405 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wljvs\" (UniqueName: \"kubernetes.io/projected/33e0d6c9-e5b8-478c-80f0-ccab7c303a93-kube-api-access-wljvs\") pod \"olm-operator-6b444d44fb-5zgww\" (UID: \"33e0d6c9-e5b8-478c-80f0-ccab7c303a93\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.630277 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmn8l\" (UniqueName: \"kubernetes.io/projected/88956ba5-c91f-435b-94fa-d639c87311f3-kube-api-access-nmn8l\") pod \"machine-config-server-kdljh\" (UID: \"88956ba5-c91f-435b-94fa-d639c87311f3\") " pod="openshift-machine-config-operator/machine-config-server-kdljh"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.630330 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" event={"ID":"814885ca-d12b-49a3-a788-4648517a1c23","Type":"ContainerStarted","Data":"041ec71157587093ffdee96d31c8608d19810d54113c1648ad1f342f54ee09a2"}
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.630380 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" event={"ID":"814885ca-d12b-49a3-a788-4648517a1c23","Type":"ContainerStarted","Data":"10a9862bfa8fb48ed262dee70ea4cf4783505353e609701fe0d44b8e976559ed"}
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.639556 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww"
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.639933 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" event={"ID":"5b5551b0-f51c-43c9-bf60-e191132339fe","Type":"ContainerStarted","Data":"c20273ebf8c91d2708a07938b196b9ddd89248bb4177298b767d1e8e9a251cc6"}
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.639958 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" event={"ID":"5b5551b0-f51c-43c9-bf60-e191132339fe","Type":"ContainerStarted","Data":"85c0262ef40ab2aeca336771dd3d20ba45b615af8c5c05130272ce1cff973c24"}
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.642914 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" event={"ID":"20e5c3b1-91ce-4cbe-866b-745cb58e8c5d","Type":"ContainerStarted","Data":"89e67cc962d4bf89057ae2a579f90a35a4e85329807e53e47f04365cd59a99df"}
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.642944 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" event={"ID":"20e5c3b1-91ce-4cbe-866b-745cb58e8c5d","Type":"ContainerStarted","Data":"64f1d84a89d35df3dd5e23a02ab891ad0a03756256ffd41ced5be5722cc1bf34"}
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.642979 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" event={"ID":"20e5c3b1-91ce-4cbe-866b-745cb58e8c5d","Type":"ContainerStarted","Data":"a4dc4d39d49eae5226959e90aa2ddd5a4dacec5f7d79a905e763a710c4671136"}
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.650560 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rxgh6" event={"ID":"df190322-1e43-4ae4-ac74-78702c913801","Type":"ContainerStarted","Data":"360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3"}
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.650598 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rxgh6" event={"ID":"df190322-1e43-4ae4-ac74-78702c913801","Type":"ContainerStarted","Data":"773264022dc035335d95553995a9e9c2d29eb6d0d5d52eedc14a38ba389878c4"}
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.654881 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" event={"ID":"e376d872-d6db-4f3b-b9f0-9fff22f7546d","Type":"ContainerStarted","Data":"80003f63be1d0c524f614428431ba7d52b02e3fe28fb71a2fa2b237957f2f821"}
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.654912 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" event={"ID":"e376d872-d6db-4f3b-b9f0-9fff22f7546d","Type":"ContainerStarted","Data":"66c46e2970936047cd492a86855b7dfda0af974a1f7369abcfb92abb63186c7a"}
Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.660112 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvghl\" (UniqueName: \"kubernetes.io/projected/edeaef2c-0b5f-4448-a890-764774c8ff03-kube-api-access-qvghl\") pod \"package-server-manager-789f6589d5-hgtxv\" (UID: \"edeaef2c-0b5f-4448-a890-764774c8ff03\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv"
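
[triage note] The burst of "SyncLoop (PLEG): event for pod" entries above is the pod lifecycle event generator relisting the runtime and turning container state changes into {ID, Type, Data} events, where Data carries the container ID; "Generic (PLEG): container finished ... exitCode=0" is the relist observing a terminated container just before the Died event is dispatched. A reduced model of that event shape and the dispatch switch (field names follow the log text; this is not the kubelet's actual type):

package main

import "fmt"

// podLifecycleEvent mirrors the event={"ID":...,"Type":...,"Data":...}
// payloads printed by the SyncLoop above (reduced model).
type podLifecycleEvent struct {
	ID   string // pod UID
	Type string // ContainerStarted, ContainerDied, ...
	Data string // container ID
}

func handle(ev podLifecycleEvent) {
	switch ev.Type {
	case "ContainerStarted":
		fmt.Printf("pod %s: container %.12s started\n", ev.ID, ev.Data)
	case "ContainerDied":
		fmt.Printf("pod %s: container %.12s died -> sync pod\n", ev.ID, ev.Data)
	default:
		fmt.Printf("pod %s: %s\n", ev.ID, ev.Type)
	}
}

func main() {
	// The openshift-config-operator sequence above: one container finishes
	// with exitCode=0 while another container in the same pod starts.
	events := []podLifecycleEvent{
		{"f452c53b-893b-4060-b573-595e98576792", "ContainerDied", "4b57da8d1eee57235d82410e50918444bac3c4f78786e34aea5a3019fbed97fe"},
		{"f452c53b-893b-4060-b573-595e98576792", "ContainerStarted", "0768556d82c063eab914e0e7af28e9e4bb30fbf6448ca9fd896d5c86ea40dc81"},
	}
	for _, ev := range events {
		handle(ev)
	}
}
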
\"edeaef2c-0b5f-4448-a890-764774c8ff03\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.665815 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" event={"ID":"92757cb7-5e41-4c2d-bbdf-0e4010e4611d","Type":"ContainerStarted","Data":"0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34"} Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.665848 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.665858 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" event={"ID":"92757cb7-5e41-4c2d-bbdf-0e4010e4611d","Type":"ContainerStarted","Data":"53b5888c06d75276ec1154090765c765987de449f501b8cfa950546c58f4dcb5"} Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.667504 4758 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9kt48 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.667539 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" podUID="deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.686110 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.700570 4758 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ssdl5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.701439 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0d98b4c-0d9c-4a9a-af05-1def738f8293-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mpftb\" (UID: \"a0d98b4c-0d9c-4a9a-af05-1def738f8293\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.700648 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" podUID="92757cb7-5e41-4c2d-bbdf-0e4010e4611d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.718050 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbdrv\" (UniqueName: \"kubernetes.io/projected/24c14c8a-2e57-452b-b70b-646c1e2bac06-kube-api-access-cbdrv\") pod \"csi-hostpathplugin-7h2zc\" (UID: \"24c14c8a-2e57-452b-b70b-646c1e2bac06\") " pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.718419 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.728185 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/830a2e4b-3e0d-409e-9a0b-bf503e81d5e0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h6d9r\" (UID: \"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.731157 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.747811 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.247762942 +0000 UTC m=+152.220074493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.748322 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.750802 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8frm7\" (UniqueName: \"kubernetes.io/projected/9313ed67-0218-4d32-adf7-710ba67de622-kube-api-access-8frm7\") pod \"collect-profiles-29496030-jndtz\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.759907 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.259866822 +0000 UTC m=+152.232178553 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.770308 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.770747 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kdljh" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.781523 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6dv4\" (UniqueName: \"kubernetes.io/projected/09200e03-f8f3-47a9-b11a-33fd6fcc1d1d-kube-api-access-z6dv4\") pod \"machine-config-operator-74547568cd-w9vtl\" (UID: \"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.790781 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79889e0a-985e-4bcc-bdfe-2160f05a5bfe-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xts57\" (UID: \"79889e0a-985e-4bcc-bdfe-2160f05a5bfe\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.809303 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bstw8\" (UniqueName: \"kubernetes.io/projected/87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426-kube-api-access-bstw8\") pod \"openshift-apiserver-operator-796bbdcf4f-vcvc6\" (UID: \"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.828091 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7866g\" (UniqueName: \"kubernetes.io/projected/67466d94-68c1-4700-aec7-f2dd533b2fd6-kube-api-access-7866g\") pod \"machine-approver-56656f9798-8qb4b\" (UID: \"67466d94-68c1-4700-aec7-f2dd533b2fd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.828970 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc7f2\" (UniqueName: \"kubernetes.io/projected/25e6f6dd-4791-48ca-a614-928eb2fd6886-kube-api-access-rc7f2\") pod \"service-ca-operator-777779d784-w6hpr\" (UID: \"25e6f6dd-4791-48ca-a614-928eb2fd6886\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.844790 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.866675 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.871958 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.872220 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99426938-7a55-4e2a-8ded-c683fe91d54d-cert\") pod \"ingress-canary-sgrqx\" (UID: \"99426938-7a55-4e2a-8ded-c683fe91d54d\") " pod="openshift-ingress-canary/ingress-canary-sgrqx" Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.877511 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.377478634 +0000 UTC m=+152.349790185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.877806 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.878820 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97djv\" (UniqueName: \"kubernetes.io/projected/49a66643-dc8c-4c84-9345-7f98676dc1d3-kube-api-access-97djv\") pod \"marketplace-operator-79b997595-7mbqg\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.893821 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.898950 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.905695 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99426938-7a55-4e2a-8ded-c683fe91d54d-cert\") pod \"ingress-canary-sgrqx\" (UID: \"99426938-7a55-4e2a-8ded-c683fe91d54d\") " pod="openshift-ingress-canary/ingress-canary-sgrqx" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.912429 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/102f0f42-f8c6-4e98-9e96-1659a0a62c50-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.915492 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.945531 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.957941 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5pbl\" (UniqueName: \"kubernetes.io/projected/102f0f42-f8c6-4e98-9e96-1659a0a62c50-kube-api-access-c5pbl\") pod \"cluster-image-registry-operator-dc59b4c8b-dt2c9\" (UID: \"102f0f42-f8c6-4e98-9e96-1659a0a62c50\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.962040 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whl5r\" (UniqueName: \"kubernetes.io/projected/125e1dd5-1556-4334-86d4-3c45fa9e833d-kube-api-access-whl5r\") pod \"kube-storage-version-migrator-operator-b67b599dd-6m25t\" (UID: \"125e1dd5-1556-4334-86d4-3c45fa9e833d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.962251 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.970139 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.973304 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:26 crc kubenswrapper[4758]: E0130 08:32:26.973596 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.473582568 +0000 UTC m=+152.445894119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.984222 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8zkrv"] Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.988115 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqv4r\" (UniqueName: \"kubernetes.io/projected/3934d9aa-0054-4ed3-a2e8-a57dc60dad77-kube-api-access-wqv4r\") pod \"control-plane-machine-set-operator-78cbb6b69f-hd9qf\" (UID: \"3934d9aa-0054-4ed3-a2e8-a57dc60dad77\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.995762 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj26n\" (UniqueName: \"kubernetes.io/projected/7fc75b96-45f4-4639-ab9b-d95a9e3ef03c-kube-api-access-jj26n\") pod \"packageserver-d55dfcdfc-mn2qr\" (UID: \"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.996409 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" Jan 30 08:32:26 crc kubenswrapper[4758]: I0130 08:32:26.999384 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8kdk\" (UniqueName: \"kubernetes.io/projected/6e70d9dc-9e4c-45a3-b8d3-046067b91297-kube-api-access-m8kdk\") pod \"catalog-operator-68c6474976-th8pq\" (UID: \"6e70d9dc-9e4c-45a3-b8d3-046067b91297\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.004633 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.032319 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.045211 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.045489 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msbbw\" (UniqueName: \"kubernetes.io/projected/5ad239a7-b360-4ef2-ae38-cd013bd6c2e6-kube-api-access-msbbw\") pod \"machine-config-controller-84d6567774-wjbc5\" (UID: \"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.064927 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc7b7\" (UniqueName: \"kubernetes.io/projected/e6e6799e-f1ee-4eee-a1f6-e69a09d888af-kube-api-access-rc7b7\") pod \"ingress-operator-5b745b69d9-q6p4h\" (UID: \"e6e6799e-f1ee-4eee-a1f6-e69a09d888af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.074545 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.074988 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.574970619 +0000 UTC m=+152.547282170 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.075190 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-sgrqx" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.105464 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj"] Jan 30 08:32:27 crc kubenswrapper[4758]: W0130 08:32:27.112921 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88956ba5_c91f_435b_94fa_d639c87311f3.slice/crio-99795367fa6b047ecf5465284dd05075132c3face63c31527ebb7d94b5b9b895 WatchSource:0}: Error finding container 99795367fa6b047ecf5465284dd05075132c3face63c31527ebb7d94b5b9b895: Status 404 returned error can't find the container with id 99795367fa6b047ecf5465284dd05075132c3face63c31527ebb7d94b5b9b895 Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.154268 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.170426 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.176914 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.177232 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.677220867 +0000 UTC m=+152.649532418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.188339 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.204311 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.220151 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.220784 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p"] Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.228988 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.257358 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.279451 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.279889 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.779874617 +0000 UTC m=+152.752186168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.388460 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.389052 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.889040873 +0000 UTC m=+152.861352424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.493581 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.494124 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.99411026 +0000 UTC m=+152.966421811 (durationBeforeRetry 500ms). 
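
The MountVolume.MountDevice and UnmountVolume.TearDown failures above all share one cause: the kubelet cannot construct a CSI client because the kubevirt.io.hostpath-provisioner driver has not yet registered itself with the kubelet; the csi-hostpathplugin pod is only coming up further down in this window (SyncLoop UPDATE at 08:32:28.129, ContainerStarted at 08:32:29.221). A minimal Go sketch of the registry lookup behind that message, with hypothetical names rather than the kubelet's actual internals:

    package main

    import (
        "fmt"
        "sync"
    )

    // driverRegistry stands in for the kubelet's table of CSI plugins that
    // have announced themselves over the plugin-registration socket.
    type driverRegistry struct {
        mu      sync.RWMutex
        drivers map[string]string // driver name -> endpoint (assumed shape)
    }

    func (r *driverRegistry) clientFor(name string) (string, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        ep, ok := r.drivers[name]
        if !ok {
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return ep, nil
    }

    func main() {
        r := &driverRegistry{drivers: map[string]string{}} // nothing registered yet
        if _, err := r.clientFor("kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println(err) // same failure mode as the Mount/Unmount errors above
        }
    }

Once the plugin registers, the same lookup succeeds and the queued mount and teardown operations proceed; until then every attempt fails fast and is requeued.
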
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.494177 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.494517 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:27.994510062 +0000 UTC m=+152.966821613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.513910 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7trvn"] Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.513953 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zp6d8"] Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.595984 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.596334 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:28.096319406 +0000 UTC m=+153.068630957 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.698567 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.698840 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:28.198829862 +0000 UTC m=+153.171141413 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.700041 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ggszh"] Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.718525 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-z8tqh"] Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.777396 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rmc6n" event={"ID":"6df11515-6ad6-40b0-bd21-fc92e2eaeca6","Type":"ContainerStarted","Data":"027cee00e588ff30f684eca9f769310f1c5539ec18cd0e62c55af4a496308162"} Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.777444 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rmc6n" event={"ID":"6df11515-6ad6-40b0-bd21-fc92e2eaeca6","Type":"ContainerStarted","Data":"d2838d70732b53ccb10c2f8543a7fba52b2a3ac5b91e8a08fa9ad1f30a101f82"} Jan 30 08:32:27 crc kubenswrapper[4758]: W0130 08:32:27.785986 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67466d94_68c1_4700_aec7_f2dd533b2fd6.slice/crio-089cb1412abf62b4c1a83080d4109a4301358111e990968640ace5383529ca46 WatchSource:0}: Error finding container 089cb1412abf62b4c1a83080d4109a4301358111e990968640ace5383529ca46: Status 404 returned error can't find the container with id 089cb1412abf62b4c1a83080d4109a4301358111e990968640ace5383529ca46 Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.799278 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.799695 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:28.299682796 +0000 UTC m=+153.271994347 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: W0130 08:32:27.830739 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0313d23_ff69_4957_ad3b_d6adc246aad5.slice/crio-2730638fa8d139a5a958673467f136fd211540da1ea032cd24098ad29a8f4782 WatchSource:0}: Error finding container 2730638fa8d139a5a958673467f136fd211540da1ea032cd24098ad29a8f4782: Status 404 returned error can't find the container with id 2730638fa8d139a5a958673467f136fd211540da1ea032cd24098ad29a8f4782 Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.900189 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:27 crc kubenswrapper[4758]: E0130 08:32:27.901371 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:28.401359475 +0000 UTC m=+153.373671016 (durationBeforeRetry 500ms). 
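
The manager.go warnings above ("Status 404 returned error can't find the container with id ...") are a benign startup race: the cgroup watch fires for a freshly created crio-<id> slice before the container is queryable from the runtime, so the lookup 404s and the event is skipped. A sketch of tolerating that race (assumed shape, not cadvisor's actual handler):

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("Status 404 returned error can't find the container")

    // lookup stands in for querying the runtime for a container by id;
    // during the race it reports not-found even though the cgroup exists.
    func lookup(id string) error { return errNotFound }

    func handleWatchEvent(id string) {
        if err := lookup(id); errors.Is(err, errNotFound) {
            // transient: the container is not visible yet (or already gone);
            // a later housekeeping pass will pick it up
            fmt.Printf("skipping container %s: %v\n", id, err)
            return
        }
        // ... normal processing would continue here
    }

    func main() {
        handleWatchEvent("2730638fa8d139a5a958673467f136fd211540da1ea032cd24098ad29a8f4782")
    }
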
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.911328 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" podStartSLOduration=132.911313919 podStartE2EDuration="2m12.911313919s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:27.910437761 +0000 UTC m=+152.882749312" watchObservedRunningTime="2026-01-30 08:32:27.911313919 +0000 UTC m=+152.883625470" Jan 30 08:32:27 crc kubenswrapper[4758]: W0130 08:32:27.947268 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e5a4d53_1458_40fc_9171_b7cac79f3b8a.slice/crio-67a44be726bba18c108c73051f4b9c6694088951f512f25a0c42733387283e06 WatchSource:0}: Error finding container 67a44be726bba18c108c73051f4b9c6694088951f512f25a0c42733387283e06: Status 404 returned error can't find the container with id 67a44be726bba18c108c73051f4b9c6694088951f512f25a0c42733387283e06 Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.964845 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-knqhk" podStartSLOduration=132.964831013 podStartE2EDuration="2m12.964831013s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:27.964238965 +0000 UTC m=+152.936550516" watchObservedRunningTime="2026-01-30 08:32:27.964831013 +0000 UTC m=+152.937142564" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.975303 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.975351 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.975367 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8zkrv" event={"ID":"755faa64-0182-4450-bd27-cb87446008d8","Type":"ContainerStarted","Data":"bab4ef183c1bfe066d408955a7bfd2317902f8d27cc1d2c37290f64b834a1c8d"} Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.975382 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" event={"ID":"359e47a9-5633-496e-9522-d7c522c674bf","Type":"ContainerStarted","Data":"e085a72720025a5a5a5147e7d06c317a2722fa31e536a60b9df5b703f60e101a"} Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.975398 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" 
event={"ID":"f452c53b-893b-4060-b573-595e98576792","Type":"ContainerStarted","Data":"9124a9f0dcfb742c66a7275135142d0fa2cc67d375d53d7b9b9183ad624d6aa7"} Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.975408 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kdljh" event={"ID":"88956ba5-c91f-435b-94fa-d639c87311f3","Type":"ContainerStarted","Data":"99795367fa6b047ecf5465284dd05075132c3face63c31527ebb7d94b5b9b895"} Jan 30 08:32:27 crc kubenswrapper[4758]: I0130 08:32:27.975418 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p" event={"ID":"becac525-d7ec-48b6-9f52-3b7ca1606e50","Type":"ContainerStarted","Data":"34e55fb62fbaa6037b0cd2380a9181c8df85e59163d0082d143ec6c76f014828"} Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.003115 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.004592 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:28.504575774 +0000 UTC m=+153.476887325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.050991 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hlm86"] Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.106032 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.119543 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:28.619522622 +0000 UTC m=+153.591834173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.129172 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7h2zc"] Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.153119 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-kljqw" podStartSLOduration=133.153092708 podStartE2EDuration="2m13.153092708s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:28.151434696 +0000 UTC m=+153.123746247" watchObservedRunningTime="2026-01-30 08:32:28.153092708 +0000 UTC m=+153.125404259" Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.212226 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.212569 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:28.712552459 +0000 UTC m=+153.684864010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.316355 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.316644 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:28.816633414 +0000 UTC m=+153.788944955 (durationBeforeRetry 500ms). 
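
Each failed attempt is parked rather than retried inline: the nestedpendingoperations layer stamps a "No retries permitted until" deadline durationBeforeRetry (500ms here) past the failure, and the reconciler re-issues the operation once the deadline passes, which is why the same volume reappears on a steady sub-second cadence. The m=+153.x suffix is Go's monotonic-clock reading, effectively seconds since the kubelet process started. Illustrative bookkeeping under those assumptions:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // pendingOp mimics the per-operation record implied by the log lines:
    // a failure sets the earliest time the operation may run again.
    type pendingOp struct {
        lastError      error
        durationBefore time.Duration
        notBefore      time.Time
    }

    func (op *pendingOp) recordFailure(err error, now time.Time) {
        op.lastError = err
        op.notBefore = now.Add(op.durationBefore)
    }

    func (op *pendingOp) mayRetry(now time.Time) bool {
        return !now.Before(op.notBefore)
    }

    func main() {
        op := &pendingOp{durationBefore: 500 * time.Millisecond}
        now := time.Now()
        op.recordFailure(errors.New("driver not registered"), now)
        fmt.Println("no retries permitted until", op.notBefore.UTC())
        fmt.Println("retry allowed immediately?", op.mayRetry(now)) // false
    }
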
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.423599 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.423910 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:28.92389531 +0000 UTC m=+153.896206861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.519817 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-rmc6n" Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.520006 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.520034 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.524735 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.524988 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.024978832 +0000 UTC m=+153.997290383 (durationBeforeRetry 500ms). 
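
The router's startup probe above fails with "connect: connection refused": nothing is listening on localhost:1936 yet. About a second later (below, at 08:32:29.545) the same probe reaches the server but gets a 500 whose body still shows [-]backend-http and [-]has-synced failing, meaning the process is up but its checks are not yet green. A minimal probe in that spirit (illustrative, not the kubelet prober itself):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probe distinguishes the two failure stages seen in the log:
    // a transport error (nothing listening) versus a non-2xx/3xx status
    // (listening but unhealthy).
    func probe(url string) error {
        c := &http.Client{Timeout: time.Second}
        resp, err := c.Get(url)
        if err != nil {
            return err // e.g. dial tcp [::1]:1936: connect: connection refused
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("http://localhost:1936/healthz/ready"); err != nil {
            fmt.Println("probe failed:", err)
        }
    }
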
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.625767 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.625942 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.125915688 +0000 UTC m=+154.098227239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.626347 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.626657 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.126650371 +0000 UTC m=+154.098961922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.679947 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" podStartSLOduration=133.679931697 podStartE2EDuration="2m13.679931697s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:28.678552234 +0000 UTC m=+153.650863775" watchObservedRunningTime="2026-01-30 08:32:28.679931697 +0000 UTC m=+153.652243248" Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.731757 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.732149 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.232133611 +0000 UTC m=+154.204445162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.843327 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.843598 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.343587928 +0000 UTC m=+154.315899479 (durationBeforeRetry 500ms). 
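
The pod_startup_latency_tracker entries carry their own arithmetic: with firstStartedPulling and lastFinishedPulling at the zero time (no image pull was needed), podStartSLOduration appears to be simply watchObservedRunningTime minus podCreationTimestamp, hence the ~2m13s figures for pods created at 08:30:15 whose running state was only observed once this kubelet began syncing. Checking the apiserver-76f77b778f-hrqb6 entry's numbers:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05 -0700 MST" // fractional seconds are accepted when parsing
        created, err := time.Parse(layout, "2026-01-30 08:30:15 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2026-01-30 08:32:28.679931697 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // Prints 2m13.679931697s, matching podStartSLOduration=133.679931697
        // and podStartE2EDuration="2m13.679931697s" in the entry above.
        fmt.Println(observed.Sub(created))
    }
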
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.864293 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6"] Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.880201 4758 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ssdl5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.880256 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" podUID="92757cb7-5e41-4c2d-bbdf-0e4010e4611d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.897849 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-rxgh6" podStartSLOduration=133.897830575 podStartE2EDuration="2m13.897830575s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:28.896769302 +0000 UTC m=+153.869080863" watchObservedRunningTime="2026-01-30 08:32:28.897830575 +0000 UTC m=+153.870142126" Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.900512 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" event={"ID":"2950f4ab-791b-4190-9455-14e34e95f22d","Type":"ContainerStarted","Data":"65eeb97e2b5d0d3c9a9003de3b0afba3d33526702e9a55731b9f89804f91d5cd"} Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.949925 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.949985 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.449962516 +0000 UTC m=+154.422274067 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.953319 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:28 crc kubenswrapper[4758]: E0130 08:32:28.953780 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.453756675 +0000 UTC m=+154.426068226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:28 crc kubenswrapper[4758]: I0130 08:32:28.962733 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" event={"ID":"67466d94-68c1-4700-aec7-f2dd533b2fd6","Type":"ContainerStarted","Data":"089cb1412abf62b4c1a83080d4109a4301358111e990968640ace5383529ca46"} Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.007044 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" podStartSLOduration=134.007023322 podStartE2EDuration="2m14.007023322s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:28.994037293 +0000 UTC m=+153.966348854" watchObservedRunningTime="2026-01-30 08:32:29.007023322 +0000 UTC m=+153.979334873" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.009286 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r"] Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.011437 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" event={"ID":"8e5a4d53-1458-40fc-9171-b7cac79f3b8a","Type":"ContainerStarted","Data":"67a44be726bba18c108c73051f4b9c6694088951f512f25a0c42733387283e06"} Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.035999 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" podStartSLOduration=134.035981313 podStartE2EDuration="2m14.035981313s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:29.035817307 +0000 UTC m=+154.008128868" watchObservedRunningTime="2026-01-30 08:32:29.035981313 +0000 UTC m=+154.008292864" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.062846 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.063190 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.563164438 +0000 UTC m=+154.535475989 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.072526 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.072992 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.572978717 +0000 UTC m=+154.545290268 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:29 crc kubenswrapper[4758]: W0130 08:32:29.073394 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod830a2e4b_3e0d_409e_9a0b_bf503e81d5e0.slice/crio-2f3dc53d89327615e58e218d1a878046e7e1ac63b8746028d4ac3a7ee9ed7eae WatchSource:0}: Error finding container 2f3dc53d89327615e58e218d1a878046e7e1ac63b8746028d4ac3a7ee9ed7eae: Status 404 returned error can't find the container with id 2f3dc53d89327615e58e218d1a878046e7e1ac63b8746028d4ac3a7ee9ed7eae Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.073881 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8zkrv" event={"ID":"755faa64-0182-4450-bd27-cb87446008d8","Type":"ContainerStarted","Data":"77ae92fedd29f08775c6b25a7d81b674b9255c4e65de2a78e169541c80f21afe"} Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.074911 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.077639 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zp6d8" event={"ID":"b0313d23-ff69-4957-ad3b-d6adc246aad5","Type":"ContainerStarted","Data":"2730638fa8d139a5a958673467f136fd211540da1ea032cd24098ad29a8f4782"} Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.078365 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-zp6d8" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.099983 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-zp6d8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.100027 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zp6d8" podUID="b0313d23-ff69-4957-ad3b-d6adc246aad5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.100100 4758 patch_prober.go:28] interesting pod/console-operator-58897d9998-8zkrv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.100176 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8zkrv" podUID="755faa64-0182-4450-bd27-cb87446008d8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.170227 4758 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr"] Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.175570 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.176287 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.676260187 +0000 UTC m=+154.648571788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.187383 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xvdbn" podStartSLOduration=134.187369877 podStartE2EDuration="2m14.187369877s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:29.186362605 +0000 UTC m=+154.158674156" watchObservedRunningTime="2026-01-30 08:32:29.187369877 +0000 UTC m=+154.159681428" Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.187979 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p" event={"ID":"becac525-d7ec-48b6-9f52-3b7ca1606e50","Type":"ContainerStarted","Data":"52039d01ec41c3fcd8336f14b78c18cb1473517434254dc585712312c0cdd85b"} Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.219057 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kdljh" event={"ID":"88956ba5-c91f-435b-94fa-d639c87311f3","Type":"ContainerStarted","Data":"7c10964b7c51d2fa7c5b8a770655e27606a8f6bc504fd63be02688a95bb450b2"} Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.221897 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" event={"ID":"24c14c8a-2e57-452b-b70b-646c1e2bac06","Type":"ContainerStarted","Data":"9a82ed84260fe27d9b5c28dfc49599735d9ee49b89213c146cfb4c0e717cde1b"} Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.296744 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.298444 4758 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.798432452 +0000 UTC m=+154.770744003 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.312016 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb"] Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.319195 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ggszh" event={"ID":"2fb1701d-491e-479d-a12b-5af9e40e2be5","Type":"ContainerStarted","Data":"b960a62c68b51fea3331be373da3df05c882404601ee0fe2885cdfb86e0fffcf"} Jan 30 08:32:29 crc kubenswrapper[4758]: W0130 08:32:29.388129 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fc75b96_45f4_4639_ab9b_d95a9e3ef03c.slice/crio-6ec9ae04020b17c09c6dcab68264d83460bb284d6b13f90502de569f915c7583 WatchSource:0}: Error finding container 6ec9ae04020b17c09c6dcab68264d83460bb284d6b13f90502de569f915c7583: Status 404 returned error can't find the container with id 6ec9ae04020b17c09c6dcab68264d83460bb284d6b13f90502de569f915c7583 Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.392941 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv"] Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.401549 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.402186 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:29.902169596 +0000 UTC m=+154.874481157 (durationBeforeRetry 500ms). 
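
The "SyncLoop (PLEG): event for pod" lines serialize each pod lifecycle event as {ID, Type, Data}, where ID is the pod UID and Data carries the container or sandbox ID the event refers to. Decoding the dns-default event above with a struct whose field names are taken straight from the log output (an illustrative type, not necessarily the kubelet's own):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type podLifecycleEvent struct {
        ID   string // pod UID
        Type string // e.g. ContainerStarted
        Data string // container or sandbox ID, depending on the event
    }

    func main() {
        raw := `{"ID":"2fb1701d-491e-479d-a12b-5af9e40e2be5","Type":"ContainerStarted","Data":"b960a62c68b51fea3331be373da3df05c882404601ee0fe2885cdfb86e0fffcf"}`
        var ev podLifecycleEvent
        if err := json.Unmarshal([]byte(raw), &ev); err != nil {
            panic(err)
        }
        fmt.Printf("pod %s: %s %s\n", ev.ID, ev.Type, ev.Data)
    }
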
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.423118 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" event={"ID":"d03a1e8b-8151-4fb9-8a25-56e567566244","Type":"ContainerStarted","Data":"5369612da68c4f4da60ef852fd60082f5798de176202520b18035a432650b161"}
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.440707 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-q592c" podStartSLOduration=134.440682899 podStartE2EDuration="2m14.440682899s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:29.387962419 +0000 UTC m=+154.360273970" watchObservedRunningTime="2026-01-30 08:32:29.440682899 +0000 UTC m=+154.412994450"
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.460209 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww"]
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.479490 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z9hfd"]
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.510712 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.514027 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.014013956 +0000 UTC m=+154.986325507 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.545415 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 08:32:29 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld
Jan 30 08:32:29 crc kubenswrapper[4758]: [+]process-running ok
Jan 30 08:32:29 crc kubenswrapper[4758]: healthz check failed
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.545463 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.615895 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.616354 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.116338127 +0000 UTC m=+155.088649678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.633207 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6"
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.633250 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6"
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.686567 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kdljh" podStartSLOduration=5.686553197 podStartE2EDuration="5.686553197s" podCreationTimestamp="2026-01-30 08:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:29.686123163 +0000 UTC m=+154.658434724" watchObservedRunningTime="2026-01-30 08:32:29.686553197 +0000 UTC m=+154.658864748"
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.691947 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-sgrqx"]
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.716907 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.718928 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.218917745 +0000 UTC m=+155.191229296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.817511 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.817952 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.317937951 +0000 UTC m=+155.290249502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.886017 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-rmc6n" podStartSLOduration=134.886002384 podStartE2EDuration="2m14.886002384s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:29.76259307 +0000 UTC m=+154.734904621" watchObservedRunningTime="2026-01-30 08:32:29.886002384 +0000 UTC m=+154.858313955"
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.889329 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz"]
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.907110 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl"]
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.922008 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:29 crc kubenswrapper[4758]: E0130 08:32:29.922319 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.422307856 +0000 UTC m=+155.394619407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.922423 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57"]
Jan 30 08:32:29 crc kubenswrapper[4758]: W0130 08:32:29.951612 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9313ed67_0218_4d32_adf7_710ba67de622.slice/crio-b859f510e54f07bd33d4310127d5c0dd60f6d82ea983d54ae89524877d89e31a WatchSource:0}: Error finding container b859f510e54f07bd33d4310127d5c0dd60f6d82ea983d54ae89524877d89e31a: Status 404 returned error can't find the container with id b859f510e54f07bd33d4310127d5c0dd60f6d82ea983d54ae89524877d89e31a
Jan 30 08:32:29 crc kubenswrapper[4758]: I0130 08:32:29.973182 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5"]
Jan 30 08:32:30 crc kubenswrapper[4758]: W0130 08:32:30.007285 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79889e0a_985e_4bcc_bdfe_2160f05a5bfe.slice/crio-8c8ec759ce0dce069d837078b460f7c784d5ea5edc535ab8b2efbb548ded61df WatchSource:0}: Error finding container 8c8ec759ce0dce069d837078b460f7c784d5ea5edc535ab8b2efbb548ded61df: Status 404 returned error can't find the container with id 8c8ec759ce0dce069d837078b460f7c784d5ea5edc535ab8b2efbb548ded61df
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.026726 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.027134 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.527119704 +0000 UTC m=+155.499431245 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.031831 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" podStartSLOduration=135.031817171 podStartE2EDuration="2m15.031817171s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.029986774 +0000 UTC m=+155.002298325" watchObservedRunningTime="2026-01-30 08:32:30.031817171 +0000 UTC m=+155.004128722"
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.150375 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.150698 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.650686293 +0000 UTC m=+155.622997844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.175759 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9"]
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.235973 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7mbqg"]
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.239878 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr"]
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.239973 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf"]
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.251026 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.251307 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.751287699 +0000 UTC m=+155.723599250 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.252903 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t"]
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.253831 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq"]
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.272053 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h"]
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.322139 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-zp6d8" podStartSLOduration=135.322122578 podStartE2EDuration="2m15.322122578s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.320093405 +0000 UTC m=+155.292404966" watchObservedRunningTime="2026-01-30 08:32:30.322122578 +0000 UTC m=+155.294434129"
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.359902 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.360246 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.860234647 +0000 UTC m=+155.832546198 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:30 crc kubenswrapper[4758]: W0130 08:32:30.365765 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e70d9dc_9e4c_45a3_b8d3_046067b91297.slice/crio-f457221aca8950e5c3c7220712543258375d6644522f27fdf5d270217a8137dc WatchSource:0}: Error finding container f457221aca8950e5c3c7220712543258375d6644522f27fdf5d270217a8137dc: Status 404 returned error can't find the container with id f457221aca8950e5c3c7220712543258375d6644522f27fdf5d270217a8137dc
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.414647 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-8zkrv" podStartSLOduration=135.414627169 podStartE2EDuration="2m15.414627169s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.411596884 +0000 UTC m=+155.383908445" watchObservedRunningTime="2026-01-30 08:32:30.414627169 +0000 UTC m=+155.386938720"
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.460558 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" event={"ID":"33e0d6c9-e5b8-478c-80f0-ccab7c303a93","Type":"ContainerStarted","Data":"effa74be220146754f2c2e16c4e4f6dfc5cc70b318cba09cb2110b4367512d45"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.461175 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" event={"ID":"33e0d6c9-e5b8-478c-80f0-ccab7c303a93","Type":"ContainerStarted","Data":"252a35670206bec95d60195729d035684c6c8df81f48cc65dca52e863e8046c3"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.461956 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww"
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.464805 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.464922 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.964905122 +0000 UTC m=+155.937216663 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.465270 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.465460 4758 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5zgww container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.466149 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" podUID="33e0d6c9-e5b8-478c-80f0-ccab7c303a93" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.467291 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:30.967281016 +0000 UTC m=+155.939592567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.472913 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" event={"ID":"d03a1e8b-8151-4fb9-8a25-56e567566244","Type":"ContainerStarted","Data":"a7a8d3d43d0f856c1f30a0258b234c1a8a4f26c427e7e8572383e79154da54bf"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.486929 4758 patch_prober.go:28] interesting pod/apiserver-76f77b778f-hrqb6 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 30 08:32:30 crc kubenswrapper[4758]: [+]log ok
Jan 30 08:32:30 crc kubenswrapper[4758]: [+]etcd ok
Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/max-in-flight-filter ok
Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 30 08:32:30 crc kubenswrapper[4758]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 30 08:32:30 crc kubenswrapper[4758]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/openshift.io-startinformers ok
Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 30 08:32:30 crc kubenswrapper[4758]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 30 08:32:30 crc kubenswrapper[4758]: livez check failed
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.486974 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" podUID="814885ca-d12b-49a3-a788-4648517a1c23" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.507775 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" event={"ID":"e6e6799e-f1ee-4eee-a1f6-e69a09d888af","Type":"ContainerStarted","Data":"58c8f78fa7843499a864cc836820fe2d2f5ab67c5729acfa3cb79814b0043da2"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.514181 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zp6d8" event={"ID":"b0313d23-ff69-4957-ad3b-d6adc246aad5","Type":"ContainerStarted","Data":"c1f552c2b93e9dfdfc6f667efdd223eb36ab22432f177677dd9d6206b07dd723"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.515096 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-zp6d8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.515128 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zp6d8" podUID="b0313d23-ff69-4957-ad3b-d6adc246aad5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused"
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.535014 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 08:32:30 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld
Jan 30 08:32:30 crc kubenswrapper[4758]: [+]process-running ok
Jan 30 08:32:30 crc kubenswrapper[4758]: healthz check failed
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.535071 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.545299 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p" event={"ID":"becac525-d7ec-48b6-9f52-3b7ca1606e50","Type":"ContainerStarted","Data":"58f5148a773e1bc4a8bd30edf666d5c87aa78d88431bd0faeeacaec7c8358280"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.560059 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ggszh" event={"ID":"2fb1701d-491e-479d-a12b-5af9e40e2be5","Type":"ContainerStarted","Data":"30c99a223e21f21e5df2ed1ea4b42c8ef4fddc223a105b29b7195b5c4302218f"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.567617 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.567896 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.067871142 +0000 UTC m=+156.040182693 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.568004 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.568233 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.068224633 +0000 UTC m=+156.040536184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.585420 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" event={"ID":"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d","Type":"ContainerStarted","Data":"170de2f1ee8573ed577993a52512dda919e22b4cb9983026b86fbb94a390fe29"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.591489 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-sgrqx" event={"ID":"99426938-7a55-4e2a-8ded-c683fe91d54d","Type":"ContainerStarted","Data":"cbb4c0871940303068cc4f60338e0c2880bcebcc879d1152607fdb20b06b3330"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.633695 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww" podStartSLOduration=135.633677913 podStartE2EDuration="2m15.633677913s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.578349121 +0000 UTC m=+155.550660662" watchObservedRunningTime="2026-01-30 08:32:30.633677913 +0000 UTC m=+155.605989464"
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.634454 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-57j8p" podStartSLOduration=135.634444537 podStartE2EDuration="2m15.634444537s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.626582039 +0000 UTC m=+155.598893590" watchObservedRunningTime="2026-01-30 08:32:30.634444537 +0000 UTC m=+155.606756088"
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.637961 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" event={"ID":"125e1dd5-1556-4334-86d4-3c45fa9e833d","Type":"ContainerStarted","Data":"5cb440fadcf919f6d9e6f6fde00a6a783c12aa736ba0426159f4f58bf9f5e110"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.662718 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" event={"ID":"edeaef2c-0b5f-4448-a890-764774c8ff03","Type":"ContainerStarted","Data":"b7f69df2279f6cc4b85befdef187b0e2b509848da0e3183d7fb2f1b2489614aa"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.669766 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.671152 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.171112391 +0000 UTC m=+156.143423952 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.689161 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-sgrqx" podStartSLOduration=6.689142488 podStartE2EDuration="6.689142488s" podCreationTimestamp="2026-01-30 08:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.68316993 +0000 UTC m=+155.655481481" watchObservedRunningTime="2026-01-30 08:32:30.689142488 +0000 UTC m=+155.661454039"
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.709134 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" event={"ID":"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c","Type":"ContainerStarted","Data":"ffb191afec5e60cd855337ae357f1de80f68393da4a5b9975604b7f0718bb8ff"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.710527 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr"
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.710635 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" event={"ID":"7fc75b96-45f4-4639-ab9b-d95a9e3ef03c","Type":"ContainerStarted","Data":"6ec9ae04020b17c09c6dcab68264d83460bb284d6b13f90502de569f915c7583"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.743701 4758 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-mn2qr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body=
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.743964 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" podUID="7fc75b96-45f4-4639-ab9b-d95a9e3ef03c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused"
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.744264 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" event={"ID":"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6","Type":"ContainerStarted","Data":"842c29b8c2a6c9ba3918b428b3bbadd1c21cabb34d92622368e828cc29436219"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.753989 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" event={"ID":"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426","Type":"ContainerStarted","Data":"c994373a7f6a806a97afefe9b99eb54de8568f3c633d97ac26c7a9f4dd767d8a"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.754244 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" event={"ID":"87b5cc78-ab0f-4bdb-ac37-9d6f0bf89426","Type":"ContainerStarted","Data":"ccad1cf51c4539b41486b84adc37adb3b716e219daf9df2bb5867e3eb8d2394d"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.772991 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.774830 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.274818724 +0000 UTC m=+156.247130265 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.775313 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" podStartSLOduration=135.775294089 podStartE2EDuration="2m15.775294089s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.766860154 +0000 UTC m=+155.739171705" watchObservedRunningTime="2026-01-30 08:32:30.775294089 +0000 UTC m=+155.747605640"
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.782911 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" event={"ID":"2950f4ab-791b-4190-9455-14e34e95f22d","Type":"ContainerStarted","Data":"a2b3795b0982b7ccd4158d3964622ced201c6d345378e7a4d6d90b9046af05fb"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.791226 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" event={"ID":"6e70d9dc-9e4c-45a3-b8d3-046067b91297","Type":"ContainerStarted","Data":"f457221aca8950e5c3c7220712543258375d6644522f27fdf5d270217a8137dc"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.841740 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcvc6" podStartSLOduration=135.84172473 podStartE2EDuration="2m15.84172473s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.825570752 +0000 UTC m=+155.797882303" watchObservedRunningTime="2026-01-30 08:32:30.84172473 +0000 UTC m=+155.814036281"
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.878427 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.879026 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.379007493 +0000 UTC m=+156.351319044 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.921989 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" event={"ID":"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0","Type":"ContainerStarted","Data":"2f3dc53d89327615e58e218d1a878046e7e1ac63b8746028d4ac3a7ee9ed7eae"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.940825 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" event={"ID":"a0d98b4c-0d9c-4a9a-af05-1def738f8293","Type":"ContainerStarted","Data":"57570cc3af6b3d405efbc7e2a23883cd7eda165d89dbaac8c596511a65483a2b"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.942861 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" event={"ID":"67466d94-68c1-4700-aec7-f2dd533b2fd6","Type":"ContainerStarted","Data":"d5b7569e27f5ba49ad012572cbbd873cabcd83a8b61675d946ee477f6f86ca74"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.943698 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" event={"ID":"9313ed67-0218-4d32-adf7-710ba67de622","Type":"ContainerStarted","Data":"b859f510e54f07bd33d4310127d5c0dd60f6d82ea983d54ae89524877d89e31a"}
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.980213 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:30 crc kubenswrapper[4758]: E0130 08:32:30.981910 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.481895841 +0000 UTC m=+156.454207402 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:30 crc kubenswrapper[4758]: I0130 08:32:30.983676 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-hlm86" podStartSLOduration=135.983663866 podStartE2EDuration="2m15.983663866s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.912699664 +0000 UTC m=+155.885011215" watchObservedRunningTime="2026-01-30 08:32:30.983663866 +0000 UTC m=+155.955975417"
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.005529 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" event={"ID":"102f0f42-f8c6-4e98-9e96-1659a0a62c50","Type":"ContainerStarted","Data":"40b4e95db369555595f1d5a15ac2210050253e33dfdb6de4c72251cb78b25d65"}
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.034455 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" event={"ID":"8e5a4d53-1458-40fc-9171-b7cac79f3b8a","Type":"ContainerStarted","Data":"1da427560a656aeb1dfdd7493b10ca62097f3619d9f72f044254b16635972ca4"}
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.076400 4758 generic.go:334] "Generic (PLEG): container finished" podID="359e47a9-5633-496e-9522-d7c522c674bf" containerID="ad4eccad36bcdea74c074cd6aae977a69767b0d379d8c00133cc5aee261fad95" exitCode=0
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.077578 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" event={"ID":"359e47a9-5633-496e-9522-d7c522c674bf","Type":"ContainerDied","Data":"ad4eccad36bcdea74c074cd6aae977a69767b0d379d8c00133cc5aee261fad95"}
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.081653 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" podStartSLOduration=136.081524547 podStartE2EDuration="2m16.081524547s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:30.991657689 +0000 UTC m=+155.963969260" watchObservedRunningTime="2026-01-30 08:32:31.081524547 +0000 UTC m=+156.053836098"
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.082614 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-z8tqh" podStartSLOduration=136.08260766 podStartE2EDuration="2m16.08260766s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:31.078567794 +0000 UTC m=+156.050879355" watchObservedRunningTime="2026-01-30 08:32:31.08260766 +0000 UTC m=+156.054919211"
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.082936 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.084067 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.584030466 +0000 UTC m=+156.556342017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.154527 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" event={"ID":"a06bcc27-063e-4acc-942d-78594f88fd2c","Type":"ContainerStarted","Data":"06020561e5deb502ae5c7cf36ab050546475bd4c7c499a3d34bc9b22ec452b2f"}
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.165699 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" event={"ID":"3934d9aa-0054-4ed3-a2e8-a57dc60dad77","Type":"ContainerStarted","Data":"4c5f19166eb64ee3da4d2e45ebea96ee6f292f6ea164dfa9965df9b0c9d79d2d"}
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.181827 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" event={"ID":"25e6f6dd-4791-48ca-a614-928eb2fd6886","Type":"ContainerStarted","Data":"b7901fea84f678d8e3dba057af8b1d3d85c65dfbabfd6582ec3e254faa0116c8"}
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.189636 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.199503 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.699484548 +0000 UTC m=+156.671796099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.216402 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" event={"ID":"79889e0a-985e-4bcc-bdfe-2160f05a5bfe","Type":"ContainerStarted","Data":"8c8ec759ce0dce069d837078b460f7c784d5ea5edc535ab8b2efbb548ded61df"}
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.232133 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" event={"ID":"49a66643-dc8c-4c84-9345-7f98676dc1d3","Type":"ContainerStarted","Data":"4118893c53a7f8a706c32c2a7e6e6021db4c895afd2c8cfa46828333ae413f1b"}
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.290860 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.291014 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.790988909 +0000 UTC m=+156.763300460 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.291492 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.292520 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.792508246 +0000 UTC m=+156.764819797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.393082 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.393470 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.893454073 +0000 UTC m=+156.865765614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.495113 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.495444 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:31.995433663 +0000 UTC m=+156.967745214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.523238 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 08:32:31 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld
Jan 30 08:32:31 crc kubenswrapper[4758]: [+]process-running ok
Jan 30 08:32:31 crc kubenswrapper[4758]: healthz check failed
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.523275 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.574577 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq"
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.596380 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.597959 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.097943858 +0000 UTC m=+157.070255409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.698372 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.698776 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.198761671 +0000 UTC m=+157.171073222 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.799892 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.800676 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.300654538 +0000 UTC m=+157.272966089 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:31 crc kubenswrapper[4758]: I0130 08:32:31.907678 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:31 crc kubenswrapper[4758]: E0130 08:32:31.907953 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.407942614 +0000 UTC m=+157.380254165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.008689 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.008928 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.50886558 +0000 UTC m=+157.481177131 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.009133 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.009467 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.509455199 +0000 UTC m=+157.481766750 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.110808 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.111328 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.611307144 +0000 UTC m=+157.583618695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.212100 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.212397 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.712386235 +0000 UTC m=+157.684697786 (durationBeforeRetry 500ms). 
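
The block above is the kubelet retrying the image-registry PVC mount and unmount on a fixed schedule: every attempt fails with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" and is requeued 500ms out (durationBeforeRetry 500ms) until the driver registers. A minimal Python sketch for pulling those retry deadlines out of a log like this one; the regex is an assumption fitted to the entries above, and the script is illustrative, not part of the kubelet:

    import re
    import sys

    # Matches e.g. "No retries permitted until 2026-01-30 08:32:31.893454073
    # +0000 UTC m=+156.865765614 (durationBeforeRetry 500ms)."
    RETRY = re.compile(
        r'No retries permitted until (\S+ \S+ \+0000 UTC)'
        r'.*\(durationBeforeRetry (\w+)\)'
    )

    for line in sys.stdin:
        m = RETRY.search(line)
        if m:
            deadline, backoff = m.groups()
            print(f"{deadline}  retry in {backoff}")

Piped a log such as this one on stdin, it prints one deadline per failed operation, which makes the constant 500ms cadence visible at a glance.
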
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.232852 4758 patch_prober.go:28] interesting pod/console-operator-58897d9998-8zkrv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.232966 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8zkrv" podUID="755faa64-0182-4450-bd27-cb87446008d8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.242614 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" event={"ID":"359e47a9-5633-496e-9522-d7c522c674bf","Type":"ContainerStarted","Data":"6aba5664cb09159759cd5d8841f99ea17c721b3611a880f871532b6c6ebe3b5e"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.248093 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" event={"ID":"125e1dd5-1556-4334-86d4-3c45fa9e833d","Type":"ContainerStarted","Data":"7a8b363ad5ba1b0c008d5e6cad6193282a304f4608b2f2a252bb6bbf22e0c05f"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.249797 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" event={"ID":"e6e6799e-f1ee-4eee-a1f6-e69a09d888af","Type":"ContainerStarted","Data":"5a5a369ae01206b2d34799be316b979b0dd43d291661e70240ebd17ae9b074c6"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.253413 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" event={"ID":"49a66643-dc8c-4c84-9345-7f98676dc1d3","Type":"ContainerStarted","Data":"00e032b52468c18324073e8f68c53a2950d0c1e9e92eb63a8cddb5aaf6d5f40e"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.253729 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg"
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.255428 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" event={"ID":"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6","Type":"ContainerStarted","Data":"0453f5c082f2805fe108f737e7128303c3c9f6220d9d6351cc049efe3024c7c4"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.255486 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" event={"ID":"5ad239a7-b360-4ef2-ae38-cd013bd6c2e6","Type":"ContainerStarted","Data":"cc75227496b8c85356982be1fc89439666a80a5129668763b18a227a1b00a83d"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.255611 4758 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7mbqg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body=
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.255671 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused"
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.262251 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" event={"ID":"a06bcc27-063e-4acc-942d-78594f88fd2c","Type":"ContainerStarted","Data":"bbd2dbeed79fc8e831ae3bfcb2b13e43f2d3dae0048722fd165a3e1601f13b53"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.262497 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" event={"ID":"a06bcc27-063e-4acc-942d-78594f88fd2c","Type":"ContainerStarted","Data":"7ed63e3c4561753b2244e5f9b388437827771e38d4343b2d7250ccab1dd0d25e"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.275539 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj" podStartSLOduration=137.275506181 podStartE2EDuration="2m17.275506181s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.27418956 +0000 UTC m=+157.246501111" watchObservedRunningTime="2026-01-30 08:32:32.275506181 +0000 UTC m=+157.247817722"
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.275689 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" event={"ID":"6e70d9dc-9e4c-45a3-b8d3-046067b91297","Type":"ContainerStarted","Data":"93e0ab666029780fad5931ac542e70fd9dfecb9bf8d18d35d3bf794c65d34786"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.277546 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq"
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.277582 4758 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-th8pq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.277825 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" podUID="6e70d9dc-9e4c-45a3-b8d3-046067b91297" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.288087 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h6d9r" event={"ID":"830a2e4b-3e0d-409e-9a0b-bf503e81d5e0","Type":"ContainerStarted","Data":"6016ffe9e47e855f6cf2cedda9df7e72d0996ed1ecefb2107352419a6f6fab21"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.322794 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.325724 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-z9hfd" podStartSLOduration=137.325706141 podStartE2EDuration="2m17.325706141s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.300478097 +0000 UTC m=+157.272789648" watchObservedRunningTime="2026-01-30 08:32:32.325706141 +0000 UTC m=+157.298017692"
Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.329715 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.829693147 +0000 UTC m=+157.802004698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.323027 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ggszh" event={"ID":"2fb1701d-491e-479d-a12b-5af9e40e2be5","Type":"ContainerStarted","Data":"d41cd86d54048a93d1d7820887e6dc1684ec4ea5df97a11b60c1915b01803ccd"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.337461 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-ggszh"
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.352447 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" event={"ID":"67466d94-68c1-4700-aec7-f2dd533b2fd6","Type":"ContainerStarted","Data":"b87b3c683082f67bae835a5cf24a8994a7a91bf697c96b217474d6a769ccbb1b"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.354852 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" podStartSLOduration=137.354840668 podStartE2EDuration="2m17.354840668s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.329444699 +0000 UTC m=+157.301756260" watchObservedRunningTime="2026-01-30 08:32:32.354840668 +0000 UTC m=+157.327152219"
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.381852 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" event={"ID":"9313ed67-0218-4d32-adf7-710ba67de622","Type":"ContainerStarted","Data":"166bda0445ccb2dd7e9715331b62d1e0995dd20e40dce82e327a772dd4e0caf9"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.384939 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" event={"ID":"102f0f42-f8c6-4e98-9e96-1659a0a62c50","Type":"ContainerStarted","Data":"d0aca64fbc3bfe7bbe5c39309610ec0ccb266abea7a26fb6fd1c5e964de159a9"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.389585 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" event={"ID":"d03a1e8b-8151-4fb9-8a25-56e567566244","Type":"ContainerStarted","Data":"c554ccfa8bb9db09f2eb5f749b13985fb4cdb27d39068e0027ba26e865e3c345"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.395565 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" event={"ID":"3934d9aa-0054-4ed3-a2e8-a57dc60dad77","Type":"ContainerStarted","Data":"8862f29e3a46d9917505d86ad0c5bd0140ddd010b0dca5c169a654101c5babf0"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.415920 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6m25t" podStartSLOduration=137.415875429 podStartE2EDuration="2m17.415875429s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.402020333 +0000 UTC m=+157.374331894" watchObservedRunningTime="2026-01-30 08:32:32.415875429 +0000 UTC m=+157.388186990"
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.416724 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wjbc5" podStartSLOduration=137.416718046 podStartE2EDuration="2m17.416718046s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.359477544 +0000 UTC m=+157.331789095" watchObservedRunningTime="2026-01-30 08:32:32.416718046 +0000 UTC m=+157.389029597"
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.418340 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" event={"ID":"24c14c8a-2e57-452b-b70b-646c1e2bac06","Type":"ContainerStarted","Data":"16a432d54676052b9d25233e837560dc1a1609ad9c6b03b13f1da7c2a5d5db5b"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.427389 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-sgrqx" event={"ID":"99426938-7a55-4e2a-8ded-c683fe91d54d","Type":"ContainerStarted","Data":"3f5d9934ca5501836036e6fe57ea1ae093259aca2a0d45b91fcb66120bce9784"}
Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.444777 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:32.944745928 +0000 UTC m=+157.917057479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.456953 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-ggszh" podStartSLOduration=8.456934321 podStartE2EDuration="8.456934321s" podCreationTimestamp="2026-01-30 08:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.453115131 +0000 UTC m=+157.425426692" watchObservedRunningTime="2026-01-30 08:32:32.456934321 +0000 UTC m=+157.429245862"
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.442634 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.465900 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" event={"ID":"a0d98b4c-0d9c-4a9a-af05-1def738f8293","Type":"ContainerStarted","Data":"ae960977790be11ea097f997b4630865b4381fae1429d9f6f19c327a6f8c5e1d"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.482571 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" event={"ID":"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d","Type":"ContainerStarted","Data":"3ab0fe430a0ae3beefa28df4853e4e87f67e12b274152fc3cd02f29abfca8fdc"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.482842 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" event={"ID":"09200e03-f8f3-47a9-b11a-33fd6fcc1d1d","Type":"ContainerStarted","Data":"1e112dd7f5bdc65162e7f5f8f1c8461f4827620ea6af4f19926fc082e5b43d6f"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.491642 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" event={"ID":"25e6f6dd-4791-48ca-a614-928eb2fd6886","Type":"ContainerStarted","Data":"e4533caa89600f686f8b8228b28367ada54e64482732c4e1f1b5a013fe03ec66"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.511744 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" event={"ID":"79889e0a-985e-4bcc-bdfe-2160f05a5bfe","Type":"ContainerStarted","Data":"a190aec5a59d2c6f066a4d09df352ffcbb867d0c1ad56c2ccb6bd8a1f97a8d0d"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.520490 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" podStartSLOduration=137.520460601 podStartE2EDuration="2m17.520460601s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.519830081 +0000 UTC m=+157.492141632" watchObservedRunningTime="2026-01-30 08:32:32.520460601 +0000 UTC m=+157.492772152"
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.521992 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 08:32:32 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld
Jan 30 08:32:32 crc kubenswrapper[4758]: [+]process-running ok
Jan 30 08:32:32 crc kubenswrapper[4758]: healthz check failed
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.522230 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.532583 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" event={"ID":"edeaef2c-0b5f-4448-a890-764774c8ff03","Type":"ContainerStarted","Data":"abfdaff97c89a9d31d5044850377bdff49bc11e6a6ff91ddc141bbf554d230a1"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.533864 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" event={"ID":"edeaef2c-0b5f-4448-a890-764774c8ff03","Type":"ContainerStarted","Data":"43602cbe1f45f39ae6b7eb5319d51c475098db3ab2450d8eab0a4797dcbc93ed"}
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.533983 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv"
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.534551 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-zp6d8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.534629 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zp6d8" podUID="b0313d23-ff69-4957-ad3b-d6adc246aad5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused"
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.552357 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5zgww"
Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.562382 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.564322 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.064282649 +0000 UTC m=+158.036594200 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.064282649 +0000 UTC m=+158.036594200 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.608728 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" podStartSLOduration=137.608698737 podStartE2EDuration="2m17.608698737s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.606081555 +0000 UTC m=+157.578393116" watchObservedRunningTime="2026-01-30 08:32:32.608698737 +0000 UTC m=+157.581010278" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.609562 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8qb4b" podStartSLOduration=137.609557294 podStartE2EDuration="2m17.609557294s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.569330808 +0000 UTC m=+157.541642379" watchObservedRunningTime="2026-01-30 08:32:32.609557294 +0000 UTC m=+157.581868845" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.656625 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9vtl" podStartSLOduration=137.656598735 podStartE2EDuration="2m17.656598735s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.653517758 +0000 UTC m=+157.625829319" watchObservedRunningTime="2026-01-30 08:32:32.656598735 +0000 UTC m=+157.628910286" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.664359 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.665947 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.165927199 +0000 UTC m=+158.138238750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.682061 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xts57" podStartSLOduration=137.682020995 podStartE2EDuration="2m17.682020995s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.67962549 +0000 UTC m=+157.651937051" watchObservedRunningTime="2026-01-30 08:32:32.682020995 +0000 UTC m=+157.654332546" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.758610 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-8zkrv" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.768973 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.769524 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.269497018 +0000 UTC m=+158.241808569 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.773155 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-7trvn" podStartSLOduration=137.773137332 podStartE2EDuration="2m17.773137332s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.71873794 +0000 UTC m=+157.691049501" watchObservedRunningTime="2026-01-30 08:32:32.773137332 +0000 UTC m=+157.745448883" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.847403 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mpftb" podStartSLOduration=137.847388799 podStartE2EDuration="2m17.847388799s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.773624747 +0000 UTC m=+157.745936308" watchObservedRunningTime="2026-01-30 08:32:32.847388799 +0000 UTC m=+157.819700350" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.870834 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.871224 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.371212608 +0000 UTC m=+158.343524159 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.906352 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w6hpr" podStartSLOduration=137.906332344 podStartE2EDuration="2m17.906332344s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.874420079 +0000 UTC m=+157.846731640" watchObservedRunningTime="2026-01-30 08:32:32.906332344 +0000 UTC m=+157.878643915" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.907157 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hd9qf" podStartSLOduration=137.907150609 podStartE2EDuration="2m17.907150609s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.90397536 +0000 UTC m=+157.876286911" watchObservedRunningTime="2026-01-30 08:32:32.907150609 +0000 UTC m=+157.879462160" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.962702 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt2c9" podStartSLOduration=137.962687288 podStartE2EDuration="2m17.962687288s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.960853629 +0000 UTC m=+157.933165200" watchObservedRunningTime="2026-01-30 08:32:32.962687288 +0000 UTC m=+157.934998839" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.963441 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv" podStartSLOduration=137.96343535 podStartE2EDuration="2m17.96343535s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:32.940382236 +0000 UTC m=+157.912693787" watchObservedRunningTime="2026-01-30 08:32:32.96343535 +0000 UTC m=+157.935746901" Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.972131 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.972337 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-30 08:32:33.472295919 +0000 UTC m=+158.444607470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:32 crc kubenswrapper[4758]: I0130 08:32:32.972432 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:32 crc kubenswrapper[4758]: E0130 08:32:32.972738 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.472726503 +0000 UTC m=+158.445038054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.074069 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.074228 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.574210017 +0000 UTC m=+158.546521578 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.074276 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.074603 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.574592049 +0000 UTC m=+158.546903600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.175434 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.175619 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.675593808 +0000 UTC m=+158.647905359 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.175728 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.176009 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.67599292 +0000 UTC m=+158.648304481 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.277443 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.277620 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.777587877 +0000 UTC m=+158.749899428 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.277754 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.278028 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.778015831 +0000 UTC m=+158.750327382 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.378876 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.379266 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.879239246 +0000 UTC m=+158.851550797 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.480828 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.481241 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:33.981225755 +0000 UTC m=+158.953537306 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.518440 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mn2qr" Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.524219 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:32:33 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 30 08:32:33 crc kubenswrapper[4758]: [+]process-running ok Jan 30 08:32:33 crc kubenswrapper[4758]: healthz check failed Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.524289 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.539652 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" event={"ID":"e6e6799e-f1ee-4eee-a1f6-e69a09d888af","Type":"ContainerStarted","Data":"78b0a0c8a3c021adc0df24768615664a4efc2db62fe5e9e206938b79b522ada8"} Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.541392 4758 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7mbqg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.541427 4758 prober.go:107] "Probe failed" 
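
Interleaved with the volume retries, the prober entries show which health checks are still failing at this point: the router's startup probe (its healthz output lists backend-http and has-synced as not yet passing), plus readiness probes that return connection refused until each operator's endpoint comes up. A sketch that tallies the "Probe failed" entries by probe type, pod, and container; illustrative only, with the pattern fitted to the prober.go lines above:

    import re
    import sys
    from collections import Counter

    PROBE = re.compile(
        r'"Probe failed" probeType="([^"]+)" pod="([^"]+)"'
        r'.*?containerName="([^"]+)"'
    )

    fails = Counter()
    for line in sys.stdin:
        m = PROBE.search(line)
        if m:
            fails[m.groups()] += 1

    for (ptype, pod, container), n in fails.most_common():
        print(f"{n:3d}  {ptype:<9}  {pod} [{container}]")
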
probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.581621 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.581914 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.081890703 +0000 UTC m=+159.054202254 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.582190 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.582518 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.082511063 +0000 UTC m=+159.054822614 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.582956 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-th8pq" Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.683246 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.702499 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.202471848 +0000 UTC m=+159.174783399 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.741467 4758 csr.go:261] certificate signing request csr-2b69t is approved, waiting to be issued Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.821979 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.822510 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.322491145 +0000 UTC m=+159.294802696 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.864462 4758 csr.go:257] certificate signing request csr-2b69t is issued Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.865199 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q6p4h" podStartSLOduration=138.865183389 podStartE2EDuration="2m18.865183389s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:33.736136868 +0000 UTC m=+158.708448419" watchObservedRunningTime="2026-01-30 08:32:33.865183389 +0000 UTC m=+158.837494940" Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.923769 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.923851 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.423834585 +0000 UTC m=+159.396146136 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:33 crc kubenswrapper[4758]: I0130 08:32:33.924054 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:33 crc kubenswrapper[4758]: E0130 08:32:33.924444 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.424436144 +0000 UTC m=+159.396747695 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.024808 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.025232 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.525217005 +0000 UTC m=+159.497528556 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.126207 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.126489 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.626477672 +0000 UTC m=+159.598789213 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.227435 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.227568 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.727552103 +0000 UTC m=+159.699863654 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.227616 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.227893 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.727885834 +0000 UTC m=+159.700197385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.328255 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.328401 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.828382206 +0000 UTC m=+159.800693757 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.328621 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.329148 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.82913728 +0000 UTC m=+159.801448831 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.429260 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.429468 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.929440697 +0000 UTC m=+159.901752258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.429606 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.429903 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:34.929890761 +0000 UTC m=+159.902202312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.520710 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:32:34 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 30 08:32:34 crc kubenswrapper[4758]: [+]process-running ok Jan 30 08:32:34 crc kubenswrapper[4758]: healthz check failed Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.520767 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.530414 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.530628 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.030613551 +0000 UTC m=+160.002925102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.548979 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" event={"ID":"24c14c8a-2e57-452b-b70b-646c1e2bac06","Type":"ContainerStarted","Data":"c197a1884b2b080ce4fb7425f6862d556aee1a9da27a44b8623a4fc1fa6bc934"} Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.631829 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.632438 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 08:32:35.132425444 +0000 UTC m=+160.104736985 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.634346 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.652641 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-hrqb6" Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.735060 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.736006 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.235989583 +0000 UTC m=+160.208301134 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.836392 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.837436 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.337425376 +0000 UTC m=+160.309736927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.866223 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 08:27:33 +0000 UTC, rotation deadline is 2026-12-05 02:51:06.848250898 +0000 UTC Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.866260 4758 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7410h18m31.981994275s for next certificate rotation Jan 30 08:32:34 crc kubenswrapper[4758]: I0130 08:32:34.937160 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:34 crc kubenswrapper[4758]: E0130 08:32:34.937345 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.437315019 +0000 UTC m=+160.409626580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.038254 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.038547 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.538533225 +0000 UTC m=+160.510844776 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.139240 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.139513 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.639498072 +0000 UTC m=+160.611809623 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.240449 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.240926 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.740909044 +0000 UTC m=+160.713220585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.243668 4758 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.341455 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.341636 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.841611953 +0000 UTC m=+160.813923504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.342105 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.342421 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.842408068 +0000 UTC m=+160.814719619 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.443163 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.443382 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.943362275 +0000 UTC m=+160.915673836 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.443784 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.444115 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:35.944104658 +0000 UTC m=+160.916416209 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.474740 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.521977 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 08:32:35 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld Jan 30 08:32:35 crc kubenswrapper[4758]: [+]process-running ok Jan 30 08:32:35 crc kubenswrapper[4758]: healthz check failed Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.522303 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.544879 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.545174 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:36.045158368 +0000 UTC m=+161.017469919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.545452 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.545888 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:36.045872411 +0000 UTC m=+161.018183962 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.556158 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" event={"ID":"24c14c8a-2e57-452b-b70b-646c1e2bac06","Type":"ContainerStarted","Data":"c739a4289cc899bcfb2c576273550aa7de620d41a114a199724648406e243a1a"} Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.556432 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" event={"ID":"24c14c8a-2e57-452b-b70b-646c1e2bac06","Type":"ContainerStarted","Data":"591c8c89a920896f268a3c05082b81165474b1e0e6c84907c97a3efbebe19713"} Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.560565 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.560603 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.561557 4758 patch_prober.go:28] interesting pod/console-f9d7485db-rxgh6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.561585 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-rxgh6" podUID="df190322-1e43-4ae4-ac74-78702c913801" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.602483 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xpchq"] Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.603423 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.605672 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.648180 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.648467 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:36.148451799 +0000 UTC m=+161.120763350 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.649276 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:36.149257234 +0000 UTC m=+161.121568785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.649889 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.650025 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-catalog-content\") pod \"certified-operators-xpchq\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.650278 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvflw\" (UniqueName: \"kubernetes.io/projected/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-kube-api-access-tvflw\") pod \"certified-operators-xpchq\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.650671 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-utilities\") pod \"certified-operators-xpchq\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.656021 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-7h2zc" podStartSLOduration=11.656004117 podStartE2EDuration="11.656004117s" podCreationTimestamp="2026-01-30 08:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:35.636596675 +0000 UTC m=+160.608908226" watchObservedRunningTime="2026-01-30 08:32:35.656004117 +0000 UTC m=+160.628315658" Jan 30 
08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.675261 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xpchq"] Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.752645 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.752969 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:36.252937097 +0000 UTC m=+161.225248648 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.754024 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-utilities\") pod \"certified-operators-xpchq\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.754574 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.754495 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-utilities\") pod \"certified-operators-xpchq\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.755105 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 08:32:36.255040093 +0000 UTC m=+161.227351644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.755497 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-catalog-content\") pod \"certified-operators-xpchq\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.755606 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvflw\" (UniqueName: \"kubernetes.io/projected/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-kube-api-access-tvflw\") pod \"certified-operators-xpchq\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.756100 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-catalog-content\") pod \"certified-operators-xpchq\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.802325 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fvrs2"] Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.803548 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.805959 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvflw\" (UniqueName: \"kubernetes.io/projected/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-kube-api-access-tvflw\") pod \"certified-operators-xpchq\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.809542 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.835309 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fvrs2"] Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.857328 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.857591 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76w9h\" (UniqueName: \"kubernetes.io/projected/ee1dd099-7697-43e1-9777-b3735c8013e8-kube-api-access-76w9h\") pod \"community-operators-fvrs2\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.857634 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-utilities\") pod \"community-operators-fvrs2\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.857669 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-catalog-content\") pod \"community-operators-fvrs2\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.858160 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 08:32:36.358133757 +0000 UTC m=+161.330445308 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.918764 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.946389 4758 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T08:32:35.243687671Z","Handler":null,"Name":""} Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.960757 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76w9h\" (UniqueName: \"kubernetes.io/projected/ee1dd099-7697-43e1-9777-b3735c8013e8-kube-api-access-76w9h\") pod \"community-operators-fvrs2\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.961075 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-utilities\") pod \"community-operators-fvrs2\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.961108 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.961128 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-catalog-content\") pod \"community-operators-fvrs2\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.961549 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-catalog-content\") pod \"community-operators-fvrs2\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.961977 4758 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.962110 4758 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.961988 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-utilities\") pod \"community-operators-fvrs2\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:35 crc kubenswrapper[4758]: E0130 08:32:35.962016 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 08:32:36.462005716 +0000 UTC m=+161.434317267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fd88w" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.988216 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9h5f5"] Jan 30 08:32:35 crc kubenswrapper[4758]: I0130 08:32:35.990020 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.003792 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76w9h\" (UniqueName: \"kubernetes.io/projected/ee1dd099-7697-43e1-9777-b3735c8013e8-kube-api-access-76w9h\") pod \"community-operators-fvrs2\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.012721 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9h5f5"] Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.062445 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.062826 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lmb2\" (UniqueName: \"kubernetes.io/projected/34d332c4-fa91-4d24-9561-1b68c12a8224-kube-api-access-5lmb2\") pod \"certified-operators-9h5f5\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.062970 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-utilities\") pod \"certified-operators-9h5f5\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.063078 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-catalog-content\") pod \"certified-operators-9h5f5\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.115255 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.148615 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.166748 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-utilities\") pod \"certified-operators-9h5f5\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.166790 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.166811 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-catalog-content\") pod \"certified-operators-9h5f5\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.166844 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lmb2\" (UniqueName: \"kubernetes.io/projected/34d332c4-fa91-4d24-9561-1b68c12a8224-kube-api-access-5lmb2\") pod \"certified-operators-9h5f5\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.167529 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-utilities\") pod \"certified-operators-9h5f5\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.167984 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-catalog-content\") pod \"certified-operators-9h5f5\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.189627 4758 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.189660 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.216112 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6vchq"]
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.217114 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6vchq"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.220516 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lmb2\" (UniqueName: \"kubernetes.io/projected/34d332c4-fa91-4d24-9561-1b68c12a8224-kube-api-access-5lmb2\") pod \"certified-operators-9h5f5\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") " pod="openshift-marketplace/certified-operators-9h5f5"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.245447 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6vchq"]
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.268667 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-utilities\") pod \"community-operators-6vchq\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " pod="openshift-marketplace/community-operators-6vchq"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.268717 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-catalog-content\") pod \"community-operators-6vchq\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " pod="openshift-marketplace/community-operators-6vchq"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.268742 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9dg9\" (UniqueName: \"kubernetes.io/projected/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-kube-api-access-n9dg9\") pod \"community-operators-6vchq\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " pod="openshift-marketplace/community-operators-6vchq"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.330463 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9h5f5"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.336350 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xpchq"]
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.349215 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fd88w\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.369773 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9dg9\" (UniqueName: \"kubernetes.io/projected/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-kube-api-access-n9dg9\") pod \"community-operators-6vchq\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " pod="openshift-marketplace/community-operators-6vchq"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.369867 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-utilities\") pod \"community-operators-6vchq\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " pod="openshift-marketplace/community-operators-6vchq"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.369890 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-catalog-content\") pod \"community-operators-6vchq\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " pod="openshift-marketplace/community-operators-6vchq"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.370286 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-utilities\") pod \"community-operators-6vchq\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " pod="openshift-marketplace/community-operators-6vchq"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.370484 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-catalog-content\") pod \"community-operators-6vchq\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " pod="openshift-marketplace/community-operators-6vchq"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.395884 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9dg9\" (UniqueName: \"kubernetes.io/projected/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-kube-api-access-n9dg9\") pod \"community-operators-6vchq\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " pod="openshift-marketplace/community-operators-6vchq"
Jan 30 08:32:36 crc kubenswrapper[4758]: W0130 08:32:36.397754 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad53cc1a_5ef4_4e05_a996_b8e53194ef37.slice/crio-7b1a949affe965a07631839d4c9eae6592dd927e8832c885c5dddac995cd6027 WatchSource:0}: Error finding container 7b1a949affe965a07631839d4c9eae6592dd927e8832c885c5dddac995cd6027: Status 404 returned error can't find the container with id 7b1a949affe965a07631839d4c9eae6592dd927e8832c885c5dddac995cd6027
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.416984 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.417023 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.422864 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.438387 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.498059 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-zp6d8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.498096 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zp6d8" podUID="b0313d23-ff69-4957-ad3b-d6adc246aad5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.498489 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-zp6d8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.498506 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zp6d8" podUID="b0313d23-ff69-4957-ad3b-d6adc246aad5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.520466 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-rmc6n"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.526358 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 08:32:36 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld
Jan 30 08:32:36 crc kubenswrapper[4758]: [+]process-running ok
Jan 30 08:32:36 crc kubenswrapper[4758]: healthz check failed
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.526407 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.550941 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6vchq"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.593151 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpchq" event={"ID":"ad53cc1a-5ef4-4e05-a996-b8e53194ef37","Type":"ContainerStarted","Data":"7b1a949affe965a07631839d4c9eae6592dd927e8832c885c5dddac995cd6027"}
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.600557 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fvrs2"]
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.613321 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ct6sj"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.815559 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.816983 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.841121 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.844276 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.868687 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.869522 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9h5f5"]
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.887737 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4457cdf7-9bd2-43e6-8476-22076ded6dce-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4457cdf7-9bd2-43e6-8476-22076ded6dce\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.887808 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4457cdf7-9bd2-43e6-8476-22076ded6dce-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4457cdf7-9bd2-43e6-8476-22076ded6dce\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.923449 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.992354 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4457cdf7-9bd2-43e6-8476-22076ded6dce-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4457cdf7-9bd2-43e6-8476-22076ded6dce\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.992406 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4457cdf7-9bd2-43e6-8476-22076ded6dce-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4457cdf7-9bd2-43e6-8476-22076ded6dce\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 08:32:36 crc kubenswrapper[4758]: I0130 08:32:36.993573 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4457cdf7-9bd2-43e6-8476-22076ded6dce-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4457cdf7-9bd2-43e6-8476-22076ded6dce\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.053295 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4457cdf7-9bd2-43e6-8476-22076ded6dce-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4457cdf7-9bd2-43e6-8476-22076ded6dce\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.171451 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fd88w"]
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.201080 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.409486 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6vchq"]
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.515483 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.516463 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.541085 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.541180 4758 patch_prober.go:28] interesting pod/router-default-5444994796-rmc6n container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 08:32:37 crc kubenswrapper[4758]: [-]has-synced failed: reason withheld
Jan 30 08:32:37 crc kubenswrapper[4758]: [+]process-running ok
Jan 30 08:32:37 crc kubenswrapper[4758]: healthz check failed
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.541219 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rmc6n" podUID="6df11515-6ad6-40b0-bd21-fc92e2eaeca6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.541417 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.542761 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.587375 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x78bc"]
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.588331 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x78bc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.606757 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.618343 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.622392 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x78bc"]
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.639177 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-catalog-content\") pod \"redhat-marketplace-x78bc\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " pod="openshift-marketplace/redhat-marketplace-x78bc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.639280 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-utilities\") pod \"redhat-marketplace-x78bc\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " pod="openshift-marketplace/redhat-marketplace-x78bc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.639326 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdh94\" (UniqueName: \"kubernetes.io/projected/a8160a87-6f56-4d61-a17b-8049588a293b-kube-api-access-bdh94\") pod \"redhat-marketplace-x78bc\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " pod="openshift-marketplace/redhat-marketplace-x78bc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.639457 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ee9b4bc-c91f-454e-b176-411e75de16c8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7ee9b4bc-c91f-454e-b176-411e75de16c8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.639506 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ee9b4bc-c91f-454e-b176-411e75de16c8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7ee9b4bc-c91f-454e-b176-411e75de16c8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.650180 4758 generic.go:334] "Generic (PLEG): container finished" podID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerID="d0f286c054cafb8d5e4a578a3edfde29dd4ca6bca803278ecb39d974e67cc4ab" exitCode=0
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.650921 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h5f5" event={"ID":"34d332c4-fa91-4d24-9561-1b68c12a8224","Type":"ContainerDied","Data":"d0f286c054cafb8d5e4a578a3edfde29dd4ca6bca803278ecb39d974e67cc4ab"}
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.650949 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h5f5" event={"ID":"34d332c4-fa91-4d24-9561-1b68c12a8224","Type":"ContainerStarted","Data":"40c380f8967105b9213cda664c7b3c8de52ef7813797c018b751e70a571fe4ad"}
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.655970 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.668676 4758 generic.go:334] "Generic (PLEG): container finished" podID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerID="dab894bc290593f6bf4efa3b73a1fcf4864265047195960150260493f5158b78" exitCode=0
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.668762 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpchq" event={"ID":"ad53cc1a-5ef4-4e05-a996-b8e53194ef37","Type":"ContainerDied","Data":"dab894bc290593f6bf4efa3b73a1fcf4864265047195960150260493f5158b78"}
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.671313 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vchq" event={"ID":"83ea0fe7-14a8-4194-abfe-dfc8634b8acf","Type":"ContainerStarted","Data":"c0f04b21f482623d1676f95773767fabe33d02396d39eb436e9aa555e26ff568"}
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.671353 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vchq" event={"ID":"83ea0fe7-14a8-4194-abfe-dfc8634b8acf","Type":"ContainerStarted","Data":"8066581e913c0a6adf8368222fb0eb4020a0da93b298f0c82e13a38f877da9d3"}
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.677981 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" event={"ID":"467f100f-83e4-43b0-bcf0-16cfe7cb0393","Type":"ContainerStarted","Data":"1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b"}
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.678013 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" event={"ID":"467f100f-83e4-43b0-bcf0-16cfe7cb0393","Type":"ContainerStarted","Data":"cf6f00a97d6458a9fc656f00daeea6735f190156c2f34abea4396149d2c34aee"}
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.678476 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.697551 4758 generic.go:334] "Generic (PLEG): container finished" podID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerID="4fd34a8816ab9b8ac36f3ee7f4fccd351c108f7b629f6428e022613e9b4c1cfd" exitCode=0
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.698475 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvrs2" event={"ID":"ee1dd099-7697-43e1-9777-b3735c8013e8","Type":"ContainerDied","Data":"4fd34a8816ab9b8ac36f3ee7f4fccd351c108f7b629f6428e022613e9b4c1cfd"}
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.698500 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvrs2" event={"ID":"ee1dd099-7697-43e1-9777-b3735c8013e8","Type":"ContainerStarted","Data":"0bb05ce1b23a66a7b1423906b5cec1cc9cdc27900f23c97851de922a49efcf1f"}
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.740686 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ee9b4bc-c91f-454e-b176-411e75de16c8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7ee9b4bc-c91f-454e-b176-411e75de16c8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.740797 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-catalog-content\") pod \"redhat-marketplace-x78bc\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " pod="openshift-marketplace/redhat-marketplace-x78bc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.740837 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-utilities\") pod \"redhat-marketplace-x78bc\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " pod="openshift-marketplace/redhat-marketplace-x78bc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.740857 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdh94\" (UniqueName: \"kubernetes.io/projected/a8160a87-6f56-4d61-a17b-8049588a293b-kube-api-access-bdh94\") pod \"redhat-marketplace-x78bc\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " pod="openshift-marketplace/redhat-marketplace-x78bc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.740968 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ee9b4bc-c91f-454e-b176-411e75de16c8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7ee9b4bc-c91f-454e-b176-411e75de16c8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.741344 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ee9b4bc-c91f-454e-b176-411e75de16c8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7ee9b4bc-c91f-454e-b176-411e75de16c8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.752540 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-catalog-content\") pod \"redhat-marketplace-x78bc\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " pod="openshift-marketplace/redhat-marketplace-x78bc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.753159 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-utilities\") pod \"redhat-marketplace-x78bc\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " pod="openshift-marketplace/redhat-marketplace-x78bc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.774699 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ee9b4bc-c91f-454e-b176-411e75de16c8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7ee9b4bc-c91f-454e-b176-411e75de16c8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.783244 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdh94\" (UniqueName: \"kubernetes.io/projected/a8160a87-6f56-4d61-a17b-8049588a293b-kube-api-access-bdh94\") pod \"redhat-marketplace-x78bc\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " pod="openshift-marketplace/redhat-marketplace-x78bc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.800233 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.819622 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" podStartSLOduration=142.819606826 podStartE2EDuration="2m22.819606826s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:37.789000523 +0000 UTC m=+162.761312074" watchObservedRunningTime="2026-01-30 08:32:37.819606826 +0000 UTC m=+162.791918377"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.863615 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.934575 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x78bc"
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.985665 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bc5l2"]
Jan 30 08:32:37 crc kubenswrapper[4758]: I0130 08:32:37.986874 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bc5l2"
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.010009 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bc5l2"]
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.044542 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-catalog-content\") pod \"redhat-marketplace-bc5l2\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " pod="openshift-marketplace/redhat-marketplace-bc5l2"
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.044625 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb7gf\" (UniqueName: \"kubernetes.io/projected/3a74af45-ed4e-4d30-b686-942663e223c6-kube-api-access-xb7gf\") pod \"redhat-marketplace-bc5l2\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " pod="openshift-marketplace/redhat-marketplace-bc5l2"
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.044647 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-utilities\") pod \"redhat-marketplace-bc5l2\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " pod="openshift-marketplace/redhat-marketplace-bc5l2"
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.145856 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-catalog-content\") pod \"redhat-marketplace-bc5l2\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " pod="openshift-marketplace/redhat-marketplace-bc5l2"
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.145902 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4"
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.145953 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb7gf\" (UniqueName: \"kubernetes.io/projected/3a74af45-ed4e-4d30-b686-942663e223c6-kube-api-access-xb7gf\") pod \"redhat-marketplace-bc5l2\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " pod="openshift-marketplace/redhat-marketplace-bc5l2"
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.145970 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-utilities\") pod \"redhat-marketplace-bc5l2\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " pod="openshift-marketplace/redhat-marketplace-bc5l2"
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.146421 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-utilities\") pod \"redhat-marketplace-bc5l2\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " pod="openshift-marketplace/redhat-marketplace-bc5l2"
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.146657 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-catalog-content\") pod \"redhat-marketplace-bc5l2\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " pod="openshift-marketplace/redhat-marketplace-bc5l2"
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.152681 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4-metrics-certs\") pod \"network-metrics-daemon-gj6b4\" (UID: \"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4\") " pod="openshift-multus/network-metrics-daemon-gj6b4"
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.183500 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gj6b4"
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.198091 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb7gf\" (UniqueName: \"kubernetes.io/projected/3a74af45-ed4e-4d30-b686-942663e223c6-kube-api-access-xb7gf\") pod \"redhat-marketplace-bc5l2\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " pod="openshift-marketplace/redhat-marketplace-bc5l2"
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.339262 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bc5l2"
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.527501 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-rmc6n"
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.537369 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-rmc6n"
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.697226 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x78bc"]
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.770288 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4457cdf7-9bd2-43e6-8476-22076ded6dce","Type":"ContainerStarted","Data":"75f9050b0da8bff9b8eb3eabaa32d848410f146637e7bd947ed0cbfdcd3500bd"}
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.770341 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4457cdf7-9bd2-43e6-8476-22076ded6dce","Type":"ContainerStarted","Data":"4afe439e516b8e3dd4a34e3ae337b9ee56dd97e1d9906a6c1c080bbcf4203875"}
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.785002 4758 generic.go:334] "Generic (PLEG): container finished" podID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerID="c0f04b21f482623d1676f95773767fabe33d02396d39eb436e9aa555e26ff568" exitCode=0
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.785663 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vchq" event={"ID":"83ea0fe7-14a8-4194-abfe-dfc8634b8acf","Type":"ContainerDied","Data":"c0f04b21f482623d1676f95773767fabe33d02396d39eb436e9aa555e26ff568"}
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.879699 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.879678747 podStartE2EDuration="2.879678747s" podCreationTimestamp="2026-01-30 08:32:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:38.817719707 +0000 UTC m=+163.790031268" watchObservedRunningTime="2026-01-30 08:32:38.879678747 +0000 UTC m=+163.851990298"
Jan 30 08:32:38 crc kubenswrapper[4758]: I0130 08:32:38.882931 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.039365 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zqdsb"]
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.040421 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqdsb"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.044707 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.049774 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqdsb"]
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.085137 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz8l2\" (UniqueName: \"kubernetes.io/projected/9959a40e-acf9-4c57-967c-bbd102964dbb-kube-api-access-nz8l2\") pod \"redhat-operators-zqdsb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " pod="openshift-marketplace/redhat-operators-zqdsb"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.085199 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-catalog-content\") pod \"redhat-operators-zqdsb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " pod="openshift-marketplace/redhat-operators-zqdsb"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.085224 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-utilities\") pod \"redhat-operators-zqdsb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " pod="openshift-marketplace/redhat-operators-zqdsb"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.199155 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz8l2\" (UniqueName: \"kubernetes.io/projected/9959a40e-acf9-4c57-967c-bbd102964dbb-kube-api-access-nz8l2\") pod \"redhat-operators-zqdsb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " pod="openshift-marketplace/redhat-operators-zqdsb"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.199599 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-catalog-content\") pod \"redhat-operators-zqdsb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " pod="openshift-marketplace/redhat-operators-zqdsb"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.199624 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-utilities\") pod \"redhat-operators-zqdsb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " pod="openshift-marketplace/redhat-operators-zqdsb"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.200261 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-utilities\") pod \"redhat-operators-zqdsb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " pod="openshift-marketplace/redhat-operators-zqdsb"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.200376 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-catalog-content\") pod \"redhat-operators-zqdsb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " pod="openshift-marketplace/redhat-operators-zqdsb"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.249563 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz8l2\" (UniqueName: \"kubernetes.io/projected/9959a40e-acf9-4c57-967c-bbd102964dbb-kube-api-access-nz8l2\") pod \"redhat-operators-zqdsb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " pod="openshift-marketplace/redhat-operators-zqdsb"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.385569 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w9whf"]
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.386644 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w9whf"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.396202 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bc5l2"]
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.406528 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w9whf"]
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.443102 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqdsb"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.492047 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gj6b4"]
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.516871 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-utilities\") pod \"redhat-operators-w9whf\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " pod="openshift-marketplace/redhat-operators-w9whf"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.516938 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glx9s\" (UniqueName: \"kubernetes.io/projected/498010c8-fcda-4462-864d-88d7f70c2d54-kube-api-access-glx9s\") pod \"redhat-operators-w9whf\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " pod="openshift-marketplace/redhat-operators-w9whf"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.517012 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-catalog-content\") pod \"redhat-operators-w9whf\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " pod="openshift-marketplace/redhat-operators-w9whf"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.617808 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glx9s\" (UniqueName: \"kubernetes.io/projected/498010c8-fcda-4462-864d-88d7f70c2d54-kube-api-access-glx9s\") pod \"redhat-operators-w9whf\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " pod="openshift-marketplace/redhat-operators-w9whf"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.618308 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-catalog-content\") pod \"redhat-operators-w9whf\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " pod="openshift-marketplace/redhat-operators-w9whf"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.618355 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-utilities\") pod \"redhat-operators-w9whf\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " pod="openshift-marketplace/redhat-operators-w9whf"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.618807 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-catalog-content\") pod \"redhat-operators-w9whf\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " pod="openshift-marketplace/redhat-operators-w9whf"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.619095 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-utilities\") pod \"redhat-operators-w9whf\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " pod="openshift-marketplace/redhat-operators-w9whf"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.647738 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glx9s\" (UniqueName: \"kubernetes.io/projected/498010c8-fcda-4462-864d-88d7f70c2d54-kube-api-access-glx9s\") pod \"redhat-operators-w9whf\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " pod="openshift-marketplace/redhat-operators-w9whf"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.727939 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w9whf"
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.920412 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" event={"ID":"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4","Type":"ContainerStarted","Data":"f0b4a85eb0f41a9b2c411c4e5a860316719ec21a0a5965812a89ce9fda0feac2"}
Jan 30 08:32:39 crc kubenswrapper[4758]: I0130 08:32:39.940432 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7ee9b4bc-c91f-454e-b176-411e75de16c8","Type":"ContainerStarted","Data":"9a4bb59357a96ef8f4c954872cd23d4914070e5b4e01845ca5e858c6cfe1ea2f"}
Jan 30 08:32:40 crc kubenswrapper[4758]: I0130 08:32:40.028310 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bc5l2" event={"ID":"3a74af45-ed4e-4d30-b686-942663e223c6","Type":"ContainerStarted","Data":"1dd118451b6794cd44f2978fc2e3be0c8174873ed3e0e0c9a87ba8e5481221b7"}
Jan 30 08:32:40 crc kubenswrapper[4758]: I0130 08:32:40.034661 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x78bc" event={"ID":"a8160a87-6f56-4d61-a17b-8049588a293b","Type":"ContainerStarted","Data":"15ee256dfea5816a52c886c11283e7357d33ef2c840a69e7c266baa52c35e543"}
Jan 30 08:32:40 crc kubenswrapper[4758]: I0130 08:32:40.034695 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x78bc" event={"ID":"a8160a87-6f56-4d61-a17b-8049588a293b","Type":"ContainerStarted","Data":"b94b087d70a109782aecba154dc0988599437def55360286dd53d57c7243892b"}
Jan 30 08:32:40 crc kubenswrapper[4758]: I0130 08:32:40.154742 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqdsb"]
Jan 30 08:32:40 crc kubenswrapper[4758]: I0130 08:32:40.501414 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w9whf"]
Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.096777 4758 generic.go:334] "Generic (PLEG): container finished" podID="498010c8-fcda-4462-864d-88d7f70c2d54" containerID="500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336" exitCode=0
Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.096829 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9whf" event={"ID":"498010c8-fcda-4462-864d-88d7f70c2d54","Type":"ContainerDied","Data":"500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336"}
Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.096852 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9whf" event={"ID":"498010c8-fcda-4462-864d-88d7f70c2d54","Type":"ContainerStarted","Data":"3ca3f01b986f56ea3541c35403f896246a5783f95ef8ac463437f4a2145f8bd8"}
Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.129687 4758 generic.go:334] "Generic (PLEG): container finished" podID="7ee9b4bc-c91f-454e-b176-411e75de16c8" containerID="299e4a55ae6a6d9cdee83282fd6681484abb006c9d4b5da01dca9cc302d06082" exitCode=0
Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.130029 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7ee9b4bc-c91f-454e-b176-411e75de16c8","Type":"ContainerDied","Data":"299e4a55ae6a6d9cdee83282fd6681484abb006c9d4b5da01dca9cc302d06082"} Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.138508 4758 generic.go:334] "Generic (PLEG): container finished" podID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerID="08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601" exitCode=0 Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.138565 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqdsb" event={"ID":"9959a40e-acf9-4c57-967c-bbd102964dbb","Type":"ContainerDied","Data":"08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601"} Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.138591 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqdsb" event={"ID":"9959a40e-acf9-4c57-967c-bbd102964dbb","Type":"ContainerStarted","Data":"2d31c2681539f71b539b22493850f84cbd075530d6103b3a81f2ee6e83bc8595"} Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.153174 4758 generic.go:334] "Generic (PLEG): container finished" podID="a8160a87-6f56-4d61-a17b-8049588a293b" containerID="15ee256dfea5816a52c886c11283e7357d33ef2c840a69e7c266baa52c35e543" exitCode=0 Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.153251 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x78bc" event={"ID":"a8160a87-6f56-4d61-a17b-8049588a293b","Type":"ContainerDied","Data":"15ee256dfea5816a52c886c11283e7357d33ef2c840a69e7c266baa52c35e543"} Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.196990 4758 generic.go:334] "Generic (PLEG): container finished" podID="9313ed67-0218-4d32-adf7-710ba67de622" containerID="166bda0445ccb2dd7e9715331b62d1e0995dd20e40dce82e327a772dd4e0caf9" exitCode=0 Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.197110 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" event={"ID":"9313ed67-0218-4d32-adf7-710ba67de622","Type":"ContainerDied","Data":"166bda0445ccb2dd7e9715331b62d1e0995dd20e40dce82e327a772dd4e0caf9"} Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.279282 4758 generic.go:334] "Generic (PLEG): container finished" podID="3a74af45-ed4e-4d30-b686-942663e223c6" containerID="9f60c8fda6e7e66a9633ce66ac886aa456dd7fafe83d70de6130a7dc9a89f0d9" exitCode=0 Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.279352 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bc5l2" event={"ID":"3a74af45-ed4e-4d30-b686-942663e223c6","Type":"ContainerDied","Data":"9f60c8fda6e7e66a9633ce66ac886aa456dd7fafe83d70de6130a7dc9a89f0d9"} Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.290964 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" event={"ID":"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4","Type":"ContainerStarted","Data":"79d424bc37f43fcefbc74bf7e2029f041d2346c19453955c5f2bf28b84c0904f"} Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.346784 4758 generic.go:334] "Generic (PLEG): container finished" podID="4457cdf7-9bd2-43e6-8476-22076ded6dce" containerID="75f9050b0da8bff9b8eb3eabaa32d848410f146637e7bd947ed0cbfdcd3500bd" exitCode=0 Jan 30 
08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.346830 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4457cdf7-9bd2-43e6-8476-22076ded6dce","Type":"ContainerDied","Data":"75f9050b0da8bff9b8eb3eabaa32d848410f146637e7bd947ed0cbfdcd3500bd"} Jan 30 08:32:41 crc kubenswrapper[4758]: I0130 08:32:41.721647 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-ggszh" Jan 30 08:32:42 crc kubenswrapper[4758]: I0130 08:32:42.423964 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gj6b4" event={"ID":"83c5d8fe-c1cc-4335-85ab-0d7ad31f92c4","Type":"ContainerStarted","Data":"ab783e3ce5ee2a8913d69afb7c2d2b29f1e6ea0e5cdd7ecf1e01d9153d273e02"} Jan 30 08:32:42 crc kubenswrapper[4758]: I0130 08:32:42.450672 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-gj6b4" podStartSLOduration=147.450629996 podStartE2EDuration="2m27.450629996s" podCreationTimestamp="2026-01-30 08:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:32:42.443131079 +0000 UTC m=+167.415442630" watchObservedRunningTime="2026-01-30 08:32:42.450629996 +0000 UTC m=+167.422941547" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.082957 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.123510 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.138059 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4457cdf7-9bd2-43e6-8476-22076ded6dce-kube-api-access\") pod \"4457cdf7-9bd2-43e6-8476-22076ded6dce\" (UID: \"4457cdf7-9bd2-43e6-8476-22076ded6dce\") " Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.138132 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4457cdf7-9bd2-43e6-8476-22076ded6dce-kubelet-dir\") pod \"4457cdf7-9bd2-43e6-8476-22076ded6dce\" (UID: \"4457cdf7-9bd2-43e6-8476-22076ded6dce\") " Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.138604 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4457cdf7-9bd2-43e6-8476-22076ded6dce-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4457cdf7-9bd2-43e6-8476-22076ded6dce" (UID: "4457cdf7-9bd2-43e6-8476-22076ded6dce"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.145292 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4457cdf7-9bd2-43e6-8476-22076ded6dce-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4457cdf7-9bd2-43e6-8476-22076ded6dce" (UID: "4457cdf7-9bd2-43e6-8476-22076ded6dce"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.175119 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.245513 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8frm7\" (UniqueName: \"kubernetes.io/projected/9313ed67-0218-4d32-adf7-710ba67de622-kube-api-access-8frm7\") pod \"9313ed67-0218-4d32-adf7-710ba67de622\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.245618 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9313ed67-0218-4d32-adf7-710ba67de622-config-volume\") pod \"9313ed67-0218-4d32-adf7-710ba67de622\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.245646 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9313ed67-0218-4d32-adf7-710ba67de622-secret-volume\") pod \"9313ed67-0218-4d32-adf7-710ba67de622\" (UID: \"9313ed67-0218-4d32-adf7-710ba67de622\") " Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.245660 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ee9b4bc-c91f-454e-b176-411e75de16c8-kubelet-dir\") pod \"7ee9b4bc-c91f-454e-b176-411e75de16c8\" (UID: \"7ee9b4bc-c91f-454e-b176-411e75de16c8\") " Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.245683 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ee9b4bc-c91f-454e-b176-411e75de16c8-kube-api-access\") pod \"7ee9b4bc-c91f-454e-b176-411e75de16c8\" (UID: \"7ee9b4bc-c91f-454e-b176-411e75de16c8\") " Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.245932 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4457cdf7-9bd2-43e6-8476-22076ded6dce-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.245948 4758 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4457cdf7-9bd2-43e6-8476-22076ded6dce-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.247453 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9313ed67-0218-4d32-adf7-710ba67de622-config-volume" (OuterVolumeSpecName: "config-volume") pod "9313ed67-0218-4d32-adf7-710ba67de622" (UID: "9313ed67-0218-4d32-adf7-710ba67de622"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.248597 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee9b4bc-c91f-454e-b176-411e75de16c8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7ee9b4bc-c91f-454e-b176-411e75de16c8" (UID: "7ee9b4bc-c91f-454e-b176-411e75de16c8"). InnerVolumeSpecName "kube-api-access". 
Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.251669 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ee9b4bc-c91f-454e-b176-411e75de16c8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7ee9b4bc-c91f-454e-b176-411e75de16c8" (UID: "7ee9b4bc-c91f-454e-b176-411e75de16c8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.253319 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9313ed67-0218-4d32-adf7-710ba67de622-kube-api-access-8frm7" (OuterVolumeSpecName: "kube-api-access-8frm7") pod "9313ed67-0218-4d32-adf7-710ba67de622" (UID: "9313ed67-0218-4d32-adf7-710ba67de622"). InnerVolumeSpecName "kube-api-access-8frm7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.254770 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9313ed67-0218-4d32-adf7-710ba67de622-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9313ed67-0218-4d32-adf7-710ba67de622" (UID: "9313ed67-0218-4d32-adf7-710ba67de622"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.347479 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8frm7\" (UniqueName: \"kubernetes.io/projected/9313ed67-0218-4d32-adf7-710ba67de622-kube-api-access-8frm7\") on node \"crc\" DevicePath \"\""
Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.347509 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9313ed67-0218-4d32-adf7-710ba67de622-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.347518 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9313ed67-0218-4d32-adf7-710ba67de622-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.347527 4758 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ee9b4bc-c91f-454e-b176-411e75de16c8-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.347536 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ee9b4bc-c91f-454e-b176-411e75de16c8-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.474946 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4457cdf7-9bd2-43e6-8476-22076ded6dce","Type":"ContainerDied","Data":"4afe439e516b8e3dd4a34e3ae337b9ee56dd97e1d9906a6c1c080bbcf4203875"}
Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.474989 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4afe439e516b8e3dd4a34e3ae337b9ee56dd97e1d9906a6c1c080bbcf4203875"
Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.475069 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.484680 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7ee9b4bc-c91f-454e-b176-411e75de16c8","Type":"ContainerDied","Data":"9a4bb59357a96ef8f4c954872cd23d4914070e5b4e01845ca5e858c6cfe1ea2f"}
Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.484821 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a4bb59357a96ef8f4c954872cd23d4914070e5b4e01845ca5e858c6cfe1ea2f"
Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.484878 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.537617 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz"
Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.543863 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz" event={"ID":"9313ed67-0218-4d32-adf7-710ba67de622","Type":"ContainerDied","Data":"b859f510e54f07bd33d4310127d5c0dd60f6d82ea983d54ae89524877d89e31a"}
Jan 30 08:32:43 crc kubenswrapper[4758]: I0130 08:32:43.543929 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b859f510e54f07bd33d4310127d5c0dd60f6d82ea983d54ae89524877d89e31a"
Jan 30 08:32:45 crc kubenswrapper[4758]: I0130 08:32:45.569967 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-rxgh6"
Jan 30 08:32:45 crc kubenswrapper[4758]: I0130 08:32:45.574432 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-rxgh6"
Jan 30 08:32:46 crc kubenswrapper[4758]: I0130 08:32:46.497444 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-zp6d8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Jan 30 08:32:46 crc kubenswrapper[4758]: I0130 08:32:46.497768 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zp6d8" podUID="b0313d23-ff69-4957-ad3b-d6adc246aad5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused"
Jan 30 08:32:46 crc kubenswrapper[4758]: I0130 08:32:46.497520 4758 patch_prober.go:28] interesting pod/downloads-7954f5f757-zp6d8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Jan 30 08:32:46 crc kubenswrapper[4758]: I0130 08:32:46.497890 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zp6d8" podUID="b0313d23-ff69-4957-ad3b-d6adc246aad5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused"
Jan 30 08:32:52 crc kubenswrapper[4758]: I0130 08:32:52.387816 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:32:52 crc kubenswrapper[4758]: I0130 08:32:52.388671 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:32:56 crc kubenswrapper[4758]: I0130 08:32:56.432339 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:32:56 crc kubenswrapper[4758]: I0130 08:32:56.535467 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-zp6d8"
Jan 30 08:33:03 crc kubenswrapper[4758]: I0130 08:33:03.003254 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 08:33:06 crc kubenswrapper[4758]: I0130 08:33:06.950158 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hgtxv"
Jan 30 08:33:14 crc kubenswrapper[4758]: E0130 08:33:14.034949 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 30 08:33:14 crc kubenswrapper[4758]: E0130 08:33:14.035669 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdh94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-x78bc_openshift-marketplace(a8160a87-6f56-4d61-a17b-8049588a293b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 08:33:14 crc kubenswrapper[4758]: E0130 08:33:14.037960 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-x78bc" podUID="a8160a87-6f56-4d61-a17b-8049588a293b"
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.706229 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 08:33:14 crc kubenswrapper[4758]: E0130 08:33:14.706709 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee9b4bc-c91f-454e-b176-411e75de16c8" containerName="pruner"
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.706726 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee9b4bc-c91f-454e-b176-411e75de16c8" containerName="pruner"
Jan 30 08:33:14 crc kubenswrapper[4758]: E0130 08:33:14.706738 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9313ed67-0218-4d32-adf7-710ba67de622" containerName="collect-profiles"
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.706745 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9313ed67-0218-4d32-adf7-710ba67de622" containerName="collect-profiles"
Jan 30 08:33:14 crc kubenswrapper[4758]: E0130 08:33:14.706760 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4457cdf7-9bd2-43e6-8476-22076ded6dce" containerName="pruner"
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.706766 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="4457cdf7-9bd2-43e6-8476-22076ded6dce" containerName="pruner"
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.706859 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9313ed67-0218-4d32-adf7-710ba67de622" containerName="collect-profiles"
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.706867 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee9b4bc-c91f-454e-b176-411e75de16c8" containerName="pruner"
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.706877 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="4457cdf7-9bd2-43e6-8476-22076ded6dce" containerName="pruner"
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.707223 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.709768 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.711090 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.720187 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.849257 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h5f5" event={"ID":"34d332c4-fa91-4d24-9561-1b68c12a8224","Type":"ContainerStarted","Data":"007406a2a74633f2e162b319a26aced276d646abac7a003472befb9eada34dd7"}
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.854687 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpchq" event={"ID":"ad53cc1a-5ef4-4e05-a996-b8e53194ef37","Type":"ContainerStarted","Data":"b5bdd4b46ac12fea35ffe9c6fead48f31f038e946902c61ae00ea7909af021e0"}
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.861737 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vchq" event={"ID":"83ea0fe7-14a8-4194-abfe-dfc8634b8acf","Type":"ContainerStarted","Data":"9b438ecbdf2666ed95996153c185f9d9cd9dfced1f243e09c5abfee072414795"}
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.873206 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqdsb" event={"ID":"9959a40e-acf9-4c57-967c-bbd102964dbb","Type":"ContainerStarted","Data":"fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705"}
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.881907 4758 generic.go:334] "Generic (PLEG): container finished" podID="3a74af45-ed4e-4d30-b686-942663e223c6" containerID="135ca0181a7c33753604da0d83c5eecfea5b146962a0bc2ffbf124c120de0fb0" exitCode=0
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.881981 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bc5l2" event={"ID":"3a74af45-ed4e-4d30-b686-942663e223c6","Type":"ContainerDied","Data":"135ca0181a7c33753604da0d83c5eecfea5b146962a0bc2ffbf124c120de0fb0"}
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.888398 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.888487 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.888524 4758 generic.go:334] "Generic (PLEG): container finished" podID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerID="c3f68acbef392b308c2ed3b35e3acc112673547e011fad5471bd97a0db59032a" exitCode=0
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.888574 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvrs2" event={"ID":"ee1dd099-7697-43e1-9777-b3735c8013e8","Type":"ContainerDied","Data":"c3f68acbef392b308c2ed3b35e3acc112673547e011fad5471bd97a0db59032a"}
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.895815 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9whf" event={"ID":"498010c8-fcda-4462-864d-88d7f70c2d54","Type":"ContainerStarted","Data":"ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0"}
Jan 30 08:33:14 crc kubenswrapper[4758]: E0130 08:33:14.912459 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x78bc" podUID="a8160a87-6f56-4d61-a17b-8049588a293b"
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.990299 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.990450 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 08:33:14 crc kubenswrapper[4758]: I0130 08:33:14.990922 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.011946 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.037262 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.433732 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 08:33:15 crc kubenswrapper[4758]: W0130 08:33:15.451239 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod264e5b45_414d_4f1b_b3af_1d9d762ba6b8.slice/crio-1ed401a58b6047053ab565c4559873481cf778bfbac04162ed0b3e76ba29d9cb WatchSource:0}: Error finding container 1ed401a58b6047053ab565c4559873481cf778bfbac04162ed0b3e76ba29d9cb: Status 404 returned error can't find the container with id 1ed401a58b6047053ab565c4559873481cf778bfbac04162ed0b3e76ba29d9cb
Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.913232 4758 generic.go:334] "Generic (PLEG): container finished" podID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerID="007406a2a74633f2e162b319a26aced276d646abac7a003472befb9eada34dd7" exitCode=0
Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.913576 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h5f5" event={"ID":"34d332c4-fa91-4d24-9561-1b68c12a8224","Type":"ContainerDied","Data":"007406a2a74633f2e162b319a26aced276d646abac7a003472befb9eada34dd7"}
Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.915958 4758 generic.go:334] "Generic (PLEG): container finished" podID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerID="b5bdd4b46ac12fea35ffe9c6fead48f31f038e946902c61ae00ea7909af021e0" exitCode=0
Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.916115 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpchq" event={"ID":"ad53cc1a-5ef4-4e05-a996-b8e53194ef37","Type":"ContainerDied","Data":"b5bdd4b46ac12fea35ffe9c6fead48f31f038e946902c61ae00ea7909af021e0"}
Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.917778 4758 generic.go:334] "Generic (PLEG): container finished" podID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerID="9b438ecbdf2666ed95996153c185f9d9cd9dfced1f243e09c5abfee072414795" exitCode=0
Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.917986 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vchq" event={"ID":"83ea0fe7-14a8-4194-abfe-dfc8634b8acf","Type":"ContainerDied","Data":"9b438ecbdf2666ed95996153c185f9d9cd9dfced1f243e09c5abfee072414795"}
Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.918816 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"264e5b45-414d-4f1b-b3af-1d9d762ba6b8","Type":"ContainerStarted","Data":"1ed401a58b6047053ab565c4559873481cf778bfbac04162ed0b3e76ba29d9cb"}
Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.921295 4758 generic.go:334] "Generic (PLEG): container finished" podID="498010c8-fcda-4462-864d-88d7f70c2d54" containerID="ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0" exitCode=0
Jan 30 08:33:15 crc kubenswrapper[4758]: I0130 08:33:15.921550 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9whf" event={"ID":"498010c8-fcda-4462-864d-88d7f70c2d54","Type":"ContainerDied","Data":"ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0"}
Jan 30 08:33:16 crc kubenswrapper[4758]: I0130 08:33:16.927907 4758 generic.go:334] "Generic (PLEG): container finished" podID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerID="fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705" exitCode=0
Jan 30 08:33:16 crc kubenswrapper[4758]: I0130 08:33:16.927989 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqdsb" event={"ID":"9959a40e-acf9-4c57-967c-bbd102964dbb","Type":"ContainerDied","Data":"fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705"}
Jan 30 08:33:16 crc kubenswrapper[4758]: I0130 08:33:16.931493 4758 generic.go:334] "Generic (PLEG): container finished" podID="264e5b45-414d-4f1b-b3af-1d9d762ba6b8" containerID="1be38ecc028a8deb5c4ef9c4dc42d8b9e613204cca0933664cd0b32822167804" exitCode=0
Jan 30 08:33:16 crc kubenswrapper[4758]: I0130 08:33:16.931525 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"264e5b45-414d-4f1b-b3af-1d9d762ba6b8","Type":"ContainerDied","Data":"1be38ecc028a8deb5c4ef9c4dc42d8b9e613204cca0933664cd0b32822167804"}
Jan 30 08:33:17 crc kubenswrapper[4758]: I0130 08:33:17.938217 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpchq" event={"ID":"ad53cc1a-5ef4-4e05-a996-b8e53194ef37","Type":"ContainerStarted","Data":"0b2ec1128ae0f87aa56272475ed8df830dea7aa074761913f0222440a559a3e5"}
Jan 30 08:33:17 crc kubenswrapper[4758]: I0130 08:33:17.964470 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xpchq" podStartSLOduration=4.042059664 podStartE2EDuration="42.964448504s" podCreationTimestamp="2026-01-30 08:32:35 +0000 UTC" firstStartedPulling="2026-01-30 08:32:37.687351814 +0000 UTC m=+162.659663365" lastFinishedPulling="2026-01-30 08:33:16.609740654 +0000 UTC m=+201.582052205" observedRunningTime="2026-01-30 08:33:17.954338489 +0000 UTC m=+202.926650040" watchObservedRunningTime="2026-01-30 08:33:17.964448504 +0000 UTC m=+202.936760055"
Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.172260 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.341518 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kubelet-dir\") pod \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\" (UID: \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\") "
Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.341631 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kube-api-access\") pod \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\" (UID: \"264e5b45-414d-4f1b-b3af-1d9d762ba6b8\") "
Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.341660 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "264e5b45-414d-4f1b-b3af-1d9d762ba6b8" (UID: "264e5b45-414d-4f1b-b3af-1d9d762ba6b8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.341991 4758 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.347332 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "264e5b45-414d-4f1b-b3af-1d9d762ba6b8" (UID: "264e5b45-414d-4f1b-b3af-1d9d762ba6b8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.443090 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/264e5b45-414d-4f1b-b3af-1d9d762ba6b8-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.953470 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.953702 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"264e5b45-414d-4f1b-b3af-1d9d762ba6b8","Type":"ContainerDied","Data":"1ed401a58b6047053ab565c4559873481cf778bfbac04162ed0b3e76ba29d9cb"}
Jan 30 08:33:18 crc kubenswrapper[4758]: I0130 08:33:18.954464 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ed401a58b6047053ab565c4559873481cf778bfbac04162ed0b3e76ba29d9cb"
Jan 30 08:33:20 crc kubenswrapper[4758]: I0130 08:33:20.963270 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vchq" event={"ID":"83ea0fe7-14a8-4194-abfe-dfc8634b8acf","Type":"ContainerStarted","Data":"85d9eb90a4febee9431f88cec93f5cc8f121a642e4658b71fc03987881601f5d"}
Jan 30 08:33:20 crc kubenswrapper[4758]: I0130 08:33:20.966378 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvrs2" event={"ID":"ee1dd099-7697-43e1-9777-b3735c8013e8","Type":"ContainerStarted","Data":"b5c5b862b9f83415c1d77938c3409a7625341cb7402a46339bcd3712ede79206"}
Jan 30 08:33:20 crc kubenswrapper[4758]: I0130 08:33:20.982772 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6vchq" podStartSLOduration=3.118554465 podStartE2EDuration="44.982752587s" podCreationTimestamp="2026-01-30 08:32:36 +0000 UTC" firstStartedPulling="2026-01-30 08:32:37.687711415 +0000 UTC m=+162.660022956" lastFinishedPulling="2026-01-30 08:33:19.551909517 +0000 UTC m=+204.524221078" observedRunningTime="2026-01-30 08:33:20.980671012 +0000 UTC m=+205.952982563" watchObservedRunningTime="2026-01-30 08:33:20.982752587 +0000 UTC m=+205.955064128"
Jan 30 08:33:20 crc kubenswrapper[4758]: I0130 08:33:20.998716 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fvrs2" podStartSLOduration=4.890043124 podStartE2EDuration="45.998699635s" podCreationTimestamp="2026-01-30 08:32:35 +0000 UTC" firstStartedPulling="2026-01-30 08:32:37.699538608 +0000 UTC m=+162.671850159" lastFinishedPulling="2026-01-30 08:33:18.808195119 +0000 UTC m=+203.780506670" observedRunningTime="2026-01-30 08:33:20.997307141 +0000 UTC m=+205.969618692" watchObservedRunningTime="2026-01-30 08:33:20.998699635 +0000 UTC m=+205.971011186"
Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.101297 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 08:33:21 crc kubenswrapper[4758]: E0130 08:33:21.101511 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264e5b45-414d-4f1b-b3af-1d9d762ba6b8" containerName="pruner"
Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.101523 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="264e5b45-414d-4f1b-b3af-1d9d762ba6b8" containerName="pruner"
Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.101609 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="264e5b45-414d-4f1b-b3af-1d9d762ba6b8" containerName="pruner"
Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.101958 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.103533 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.103643 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.116429 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.178362 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-var-lock\") pod \"installer-9-crc\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.178444 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05bc5468-8b30-42ea-a229-dba54dddcdaf-kube-api-access\") pod \"installer-9-crc\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.178478 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-kubelet-dir\") pod \"installer-9-crc\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.281165 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05bc5468-8b30-42ea-a229-dba54dddcdaf-kube-api-access\") pod \"installer-9-crc\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.281234 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-kubelet-dir\") pod \"installer-9-crc\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.281267 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-var-lock\") pod \"installer-9-crc\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.281347 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-var-lock\") pod \"installer-9-crc\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.281386 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-kubelet-dir\") pod \"installer-9-crc\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.307913 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05bc5468-8b30-42ea-a229-dba54dddcdaf-kube-api-access\") pod \"installer-9-crc\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.414822 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 08:33:21 crc kubenswrapper[4758]: I0130 08:33:21.973217 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h5f5" event={"ID":"34d332c4-fa91-4d24-9561-1b68c12a8224","Type":"ContainerStarted","Data":"dedc2972d89437cd8baff742761e8dc72767583333c906fb9083152c08a20624"}
Jan 30 08:33:22 crc kubenswrapper[4758]: I0130 08:33:22.015915 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9h5f5" podStartSLOduration=3.483218595 podStartE2EDuration="47.015889769s" podCreationTimestamp="2026-01-30 08:32:35 +0000 UTC" firstStartedPulling="2026-01-30 08:32:37.655643566 +0000 UTC m=+162.627955117" lastFinishedPulling="2026-01-30 08:33:21.18831474 +0000 UTC m=+206.160626291" observedRunningTime="2026-01-30 08:33:22.01146998 +0000 UTC m=+206.983781541" watchObservedRunningTime="2026-01-30 08:33:22.015889769 +0000 UTC m=+206.988201320"
Jan 30 08:33:22 crc kubenswrapper[4758]: I0130 08:33:22.387803 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:33:22 crc kubenswrapper[4758]: I0130 08:33:22.388152 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:33:22 crc kubenswrapper[4758]: I0130 08:33:22.388195 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx"
Jan 30 08:33:22 crc kubenswrapper[4758]: I0130 08:33:22.388704 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 08:33:22 crc kubenswrapper[4758]: I0130 08:33:22.388797 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916" gracePeriod=600
Jan 30 08:33:22 crc kubenswrapper[4758]: I0130 08:33:22.979380 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916" exitCode=0
Jan 30 08:33:22 crc kubenswrapper[4758]: I0130 08:33:22.980112 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916"}
Jan 30 08:33:24 crc kubenswrapper[4758]: I0130 08:33:24.199834 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 08:33:24 crc kubenswrapper[4758]: W0130 08:33:24.201959 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod05bc5468_8b30_42ea_a229_dba54dddcdaf.slice/crio-f8494fe1b7bf7fd3655a15c8a2966956d30c2d2ed7f835a5e513367b2ae3ac68 WatchSource:0}: Error finding container f8494fe1b7bf7fd3655a15c8a2966956d30c2d2ed7f835a5e513367b2ae3ac68: Status 404 returned error can't find the container with id f8494fe1b7bf7fd3655a15c8a2966956d30c2d2ed7f835a5e513367b2ae3ac68
Jan 30 08:33:24 crc kubenswrapper[4758]: I0130 08:33:24.990660 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"05bc5468-8b30-42ea-a229-dba54dddcdaf","Type":"ContainerStarted","Data":"f8494fe1b7bf7fd3655a15c8a2966956d30c2d2ed7f835a5e513367b2ae3ac68"}
Jan 30 08:33:25 crc kubenswrapper[4758]: I0130 08:33:25.165881 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ssdl5"]
Jan 30 08:33:25 crc kubenswrapper[4758]: I0130 08:33:25.918949 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xpchq"
Jan 30 08:33:25 crc kubenswrapper[4758]: I0130 08:33:25.924157 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xpchq"
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.005375 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqdsb" event={"ID":"9959a40e-acf9-4c57-967c-bbd102964dbb","Type":"ContainerStarted","Data":"f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929"}
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.010977 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"6250826afc1a7b9f8792f6e6a0fcf6f685f39c7f2ff42574fdb4511420aeb1d1"}
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.013591 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bc5l2" event={"ID":"3a74af45-ed4e-4d30-b686-942663e223c6","Type":"ContainerStarted","Data":"bf1daaf06aa1582152a53746198f0e79dd8b28ceb6882633260dce64e46fe554"}
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.015649 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"05bc5468-8b30-42ea-a229-dba54dddcdaf","Type":"ContainerStarted","Data":"680d53edcf23e988cb0b58dfc2997e729a64c4eace90175659a7caa821078c76"}
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.018120 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9whf" event={"ID":"498010c8-fcda-4462-864d-88d7f70c2d54","Type":"ContainerStarted","Data":"e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2"}
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.060909 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zqdsb" podStartSLOduration=5.677304805 podStartE2EDuration="48.060887833s" podCreationTimestamp="2026-01-30 08:32:38 +0000 UTC" firstStartedPulling="2026-01-30 08:32:41.143133529 +0000 UTC m=+166.115445080" lastFinishedPulling="2026-01-30 08:33:23.526716557 +0000 UTC m=+208.499028108" observedRunningTime="2026-01-30 08:33:26.033432266 +0000 UTC m=+211.005743817" watchObservedRunningTime="2026-01-30 08:33:26.060887833 +0000 UTC m=+211.033199384"
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.061565 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bc5l2" podStartSLOduration=6.543524545 podStartE2EDuration="49.061558165s" podCreationTimestamp="2026-01-30 08:32:37 +0000 UTC" firstStartedPulling="2026-01-30 08:32:41.2811067 +0000 UTC m=+166.253418251" lastFinishedPulling="2026-01-30 08:33:23.79914032 +0000 UTC m=+208.771451871" observedRunningTime="2026-01-30 08:33:26.060109589 +0000 UTC m=+211.032421160" watchObservedRunningTime="2026-01-30 08:33:26.061558165 +0000 UTC m=+211.033869716"
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.077873 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w9whf" podStartSLOduration=3.8673670959999997 podStartE2EDuration="47.077848993s" podCreationTimestamp="2026-01-30 08:32:39 +0000 UTC" firstStartedPulling="2026-01-30 08:32:41.098457323 +0000 UTC m=+166.070768874" lastFinishedPulling="2026-01-30 08:33:24.30893922 +0000 UTC m=+209.281250771" observedRunningTime="2026-01-30 08:33:26.077071359 +0000 UTC m=+211.049382920" watchObservedRunningTime="2026-01-30 08:33:26.077848993 +0000 UTC m=+211.050160554"
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.120957 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.12093824 podStartE2EDuration="5.12093824s" podCreationTimestamp="2026-01-30 08:33:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:33:26.12028758 +0000 UTC m=+211.092599141" watchObservedRunningTime="2026-01-30 08:33:26.12093824 +0000 UTC m=+211.093249791"
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.150636 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fvrs2"
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.151516 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fvrs2"
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.289894 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xpchq"
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.290576 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fvrs2"
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.331226 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9h5f5"
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.331509 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9h5f5"
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.369013 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9h5f5"
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.551698 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6vchq"
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.551924 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6vchq"
Jan 30 08:33:26 crc kubenswrapper[4758]: I0130 08:33:26.595722 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6vchq"
Jan 30 08:33:27 crc kubenswrapper[4758]: I0130 08:33:27.081395 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fvrs2"
Jan 30 08:33:27 crc kubenswrapper[4758]: I0130 08:33:27.082717 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6vchq"
Jan 30 08:33:27 crc kubenswrapper[4758]: I0130 08:33:27.089357 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9h5f5"
Jan 30 08:33:27 crc kubenswrapper[4758]: I0130 08:33:27.094329 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xpchq"
Jan 30 08:33:28 crc kubenswrapper[4758]: I0130 08:33:28.340320 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bc5l2"
Jan 30 08:33:28 crc kubenswrapper[4758]: I0130 08:33:28.340620 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bc5l2"
Jan 30 08:33:28 crc kubenswrapper[4758]: I0130 08:33:28.415388 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bc5l2"
Jan 30 08:33:29 crc kubenswrapper[4758]: I0130 08:33:29.443701 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zqdsb"
Jan 30 08:33:29 crc kubenswrapper[4758]: I0130 08:33:29.445707 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zqdsb"
Jan 30 08:33:29 crc kubenswrapper[4758]: I0130 08:33:29.728498 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w9whf"
Jan 30 08:33:29 crc kubenswrapper[4758]: I0130 08:33:29.728671 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w9whf"
Jan 30 08:33:29 crc kubenswrapper[4758]: I0130 08:33:29.880524 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9h5f5"]
Jan 30 08:33:29 crc kubenswrapper[4758]: I0130 08:33:29.881470 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9h5f5" podUID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerName="registry-server" containerID="cri-o://dedc2972d89437cd8baff742761e8dc72767583333c906fb9083152c08a20624" gracePeriod=2
Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.041395 4758 generic.go:334] "Generic (PLEG): container finished" podID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerID="dedc2972d89437cd8baff742761e8dc72767583333c906fb9083152c08a20624" exitCode=0
Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.041450 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h5f5" event={"ID":"34d332c4-fa91-4d24-9561-1b68c12a8224","Type":"ContainerDied","Data":"dedc2972d89437cd8baff742761e8dc72767583333c906fb9083152c08a20624"}
Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.043324 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x78bc" event={"ID":"a8160a87-6f56-4d61-a17b-8049588a293b","Type":"ContainerStarted","Data":"b6863769607a53efb0adc05b3d382f626fe18a9026de5f17efe5402505ed63f4"}
Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.283927 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9h5f5"
Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.309058 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-utilities\") pod \"34d332c4-fa91-4d24-9561-1b68c12a8224\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") "
Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.309178 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lmb2\" (UniqueName: \"kubernetes.io/projected/34d332c4-fa91-4d24-9561-1b68c12a8224-kube-api-access-5lmb2\") pod \"34d332c4-fa91-4d24-9561-1b68c12a8224\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") "
Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.309302 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-catalog-content\") pod \"34d332c4-fa91-4d24-9561-1b68c12a8224\" (UID: \"34d332c4-fa91-4d24-9561-1b68c12a8224\") "
Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.309867 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-utilities" (OuterVolumeSpecName: "utilities") pod "34d332c4-fa91-4d24-9561-1b68c12a8224" (UID: "34d332c4-fa91-4d24-9561-1b68c12a8224"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.327325 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34d332c4-fa91-4d24-9561-1b68c12a8224-kube-api-access-5lmb2" (OuterVolumeSpecName: "kube-api-access-5lmb2") pod "34d332c4-fa91-4d24-9561-1b68c12a8224" (UID: "34d332c4-fa91-4d24-9561-1b68c12a8224"). InnerVolumeSpecName "kube-api-access-5lmb2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.360674 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34d332c4-fa91-4d24-9561-1b68c12a8224" (UID: "34d332c4-fa91-4d24-9561-1b68c12a8224"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.411317 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.411350 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lmb2\" (UniqueName: \"kubernetes.io/projected/34d332c4-fa91-4d24-9561-1b68c12a8224-kube-api-access-5lmb2\") on node \"crc\" DevicePath \"\""
Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.411364 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34d332c4-fa91-4d24-9561-1b68c12a8224-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.482864 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6vchq"]
Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.483080 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6vchq" podUID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerName="registry-server" containerID="cri-o://85d9eb90a4febee9431f88cec93f5cc8f121a642e4658b71fc03987881601f5d" gracePeriod=2
Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.489908 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zqdsb" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerName="registry-server" probeResult="failure" output=<
Jan 30 08:33:30 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s
Jan 30 08:33:30 crc kubenswrapper[4758]: >
Jan 30 08:33:30 crc kubenswrapper[4758]: I0130 08:33:30.769834 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w9whf" podUID="498010c8-fcda-4462-864d-88d7f70c2d54" containerName="registry-server" probeResult="failure" output=<
Jan 30 08:33:30 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s
Jan 30 08:33:30 crc kubenswrapper[4758]: >
Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.057927 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9h5f5" event={"ID":"34d332c4-fa91-4d24-9561-1b68c12a8224","Type":"ContainerDied","Data":"40c380f8967105b9213cda664c7b3c8de52ef7813797c018b751e70a571fe4ad"}
Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.058333 4758 scope.go:117] "RemoveContainer" containerID="dedc2972d89437cd8baff742761e8dc72767583333c906fb9083152c08a20624"
"RemoveContainer" containerID="dedc2972d89437cd8baff742761e8dc72767583333c906fb9083152c08a20624" Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.058019 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9h5f5" Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.061580 4758 generic.go:334] "Generic (PLEG): container finished" podID="a8160a87-6f56-4d61-a17b-8049588a293b" containerID="b6863769607a53efb0adc05b3d382f626fe18a9026de5f17efe5402505ed63f4" exitCode=0 Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.061614 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x78bc" event={"ID":"a8160a87-6f56-4d61-a17b-8049588a293b","Type":"ContainerDied","Data":"b6863769607a53efb0adc05b3d382f626fe18a9026de5f17efe5402505ed63f4"} Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.083265 4758 scope.go:117] "RemoveContainer" containerID="007406a2a74633f2e162b319a26aced276d646abac7a003472befb9eada34dd7" Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.134321 4758 scope.go:117] "RemoveContainer" containerID="d0f286c054cafb8d5e4a578a3edfde29dd4ca6bca803278ecb39d974e67cc4ab" Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.136072 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9h5f5"] Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.143963 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9h5f5"] Jan 30 08:33:31 crc kubenswrapper[4758]: I0130 08:33:31.774060 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34d332c4-fa91-4d24-9561-1b68c12a8224" path="/var/lib/kubelet/pods/34d332c4-fa91-4d24-9561-1b68c12a8224/volumes" Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.069720 4758 generic.go:334] "Generic (PLEG): container finished" podID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerID="85d9eb90a4febee9431f88cec93f5cc8f121a642e4658b71fc03987881601f5d" exitCode=0 Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.069802 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vchq" event={"ID":"83ea0fe7-14a8-4194-abfe-dfc8634b8acf","Type":"ContainerDied","Data":"85d9eb90a4febee9431f88cec93f5cc8f121a642e4658b71fc03987881601f5d"} Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.254239 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.336110 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9dg9\" (UniqueName: \"kubernetes.io/projected/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-kube-api-access-n9dg9\") pod \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.336176 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-catalog-content\") pod \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.336290 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-utilities\") pod \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\" (UID: \"83ea0fe7-14a8-4194-abfe-dfc8634b8acf\") " Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.337243 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-utilities" (OuterVolumeSpecName: "utilities") pod "83ea0fe7-14a8-4194-abfe-dfc8634b8acf" (UID: "83ea0fe7-14a8-4194-abfe-dfc8634b8acf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.347594 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-kube-api-access-n9dg9" (OuterVolumeSpecName: "kube-api-access-n9dg9") pod "83ea0fe7-14a8-4194-abfe-dfc8634b8acf" (UID: "83ea0fe7-14a8-4194-abfe-dfc8634b8acf"). InnerVolumeSpecName "kube-api-access-n9dg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.395012 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83ea0fe7-14a8-4194-abfe-dfc8634b8acf" (UID: "83ea0fe7-14a8-4194-abfe-dfc8634b8acf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.437594 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.437637 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9dg9\" (UniqueName: \"kubernetes.io/projected/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-kube-api-access-n9dg9\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:32 crc kubenswrapper[4758]: I0130 08:33:32.437649 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ea0fe7-14a8-4194-abfe-dfc8634b8acf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:33 crc kubenswrapper[4758]: I0130 08:33:33.078884 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x78bc" event={"ID":"a8160a87-6f56-4d61-a17b-8049588a293b","Type":"ContainerStarted","Data":"d35d25a07f88153e84e88d2a6f9ec5fe408fc2f157e85bba35a636862f0e3d16"} Jan 30 08:33:33 crc kubenswrapper[4758]: I0130 08:33:33.081840 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vchq" event={"ID":"83ea0fe7-14a8-4194-abfe-dfc8634b8acf","Type":"ContainerDied","Data":"8066581e913c0a6adf8368222fb0eb4020a0da93b298f0c82e13a38f877da9d3"} Jan 30 08:33:33 crc kubenswrapper[4758]: I0130 08:33:33.081907 4758 scope.go:117] "RemoveContainer" containerID="85d9eb90a4febee9431f88cec93f5cc8f121a642e4658b71fc03987881601f5d" Jan 30 08:33:33 crc kubenswrapper[4758]: I0130 08:33:33.081910 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6vchq" Jan 30 08:33:33 crc kubenswrapper[4758]: I0130 08:33:33.096597 4758 scope.go:117] "RemoveContainer" containerID="9b438ecbdf2666ed95996153c185f9d9cd9dfced1f243e09c5abfee072414795" Jan 30 08:33:33 crc kubenswrapper[4758]: I0130 08:33:33.117615 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6vchq"] Jan 30 08:33:33 crc kubenswrapper[4758]: I0130 08:33:33.124875 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6vchq"] Jan 30 08:33:33 crc kubenswrapper[4758]: I0130 08:33:33.133070 4758 scope.go:117] "RemoveContainer" containerID="c0f04b21f482623d1676f95773767fabe33d02396d39eb436e9aa555e26ff568" Jan 30 08:33:33 crc kubenswrapper[4758]: I0130 08:33:33.777297 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" path="/var/lib/kubelet/pods/83ea0fe7-14a8-4194-abfe-dfc8634b8acf/volumes" Jan 30 08:33:34 crc kubenswrapper[4758]: I0130 08:33:34.110594 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x78bc" podStartSLOduration=5.549672587 podStartE2EDuration="57.11056906s" podCreationTimestamp="2026-01-30 08:32:37 +0000 UTC" firstStartedPulling="2026-01-30 08:32:41.185529602 +0000 UTC m=+166.157841153" lastFinishedPulling="2026-01-30 08:33:32.746426065 +0000 UTC m=+217.718737626" observedRunningTime="2026-01-30 08:33:34.107437942 +0000 UTC m=+219.079749493" watchObservedRunningTime="2026-01-30 08:33:34.11056906 +0000 UTC m=+219.082880611" Jan 30 08:33:37 crc kubenswrapper[4758]: I0130 08:33:37.936141 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:33:37 crc kubenswrapper[4758]: I0130 08:33:37.936571 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:33:37 crc kubenswrapper[4758]: I0130 08:33:37.976951 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:33:38 crc kubenswrapper[4758]: I0130 08:33:38.142903 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:33:38 crc kubenswrapper[4758]: I0130 08:33:38.375979 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:33:39 crc kubenswrapper[4758]: I0130 08:33:39.484916 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:33:39 crc kubenswrapper[4758]: I0130 08:33:39.531064 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:33:39 crc kubenswrapper[4758]: I0130 08:33:39.763235 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:33:39 crc kubenswrapper[4758]: I0130 08:33:39.800490 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:33:40 crc kubenswrapper[4758]: I0130 08:33:40.879858 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bc5l2"] Jan 30 08:33:40 crc 
kubenswrapper[4758]: I0130 08:33:40.880599 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bc5l2" podUID="3a74af45-ed4e-4d30-b686-942663e223c6" containerName="registry-server" containerID="cri-o://bf1daaf06aa1582152a53746198f0e79dd8b28ceb6882633260dce64e46fe554" gracePeriod=2 Jan 30 08:33:41 crc kubenswrapper[4758]: I0130 08:33:41.125455 4758 generic.go:334] "Generic (PLEG): container finished" podID="3a74af45-ed4e-4d30-b686-942663e223c6" containerID="bf1daaf06aa1582152a53746198f0e79dd8b28ceb6882633260dce64e46fe554" exitCode=0 Jan 30 08:33:41 crc kubenswrapper[4758]: I0130 08:33:41.126030 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bc5l2" event={"ID":"3a74af45-ed4e-4d30-b686-942663e223c6","Type":"ContainerDied","Data":"bf1daaf06aa1582152a53746198f0e79dd8b28ceb6882633260dce64e46fe554"} Jan 30 08:33:41 crc kubenswrapper[4758]: I0130 08:33:41.806964 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:33:41 crc kubenswrapper[4758]: I0130 08:33:41.907442 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-utilities\") pod \"3a74af45-ed4e-4d30-b686-942663e223c6\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " Jan 30 08:33:41 crc kubenswrapper[4758]: I0130 08:33:41.907530 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb7gf\" (UniqueName: \"kubernetes.io/projected/3a74af45-ed4e-4d30-b686-942663e223c6-kube-api-access-xb7gf\") pod \"3a74af45-ed4e-4d30-b686-942663e223c6\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " Jan 30 08:33:41 crc kubenswrapper[4758]: I0130 08:33:41.907557 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-catalog-content\") pod \"3a74af45-ed4e-4d30-b686-942663e223c6\" (UID: \"3a74af45-ed4e-4d30-b686-942663e223c6\") " Jan 30 08:33:41 crc kubenswrapper[4758]: I0130 08:33:41.908429 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-utilities" (OuterVolumeSpecName: "utilities") pod "3a74af45-ed4e-4d30-b686-942663e223c6" (UID: "3a74af45-ed4e-4d30-b686-942663e223c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:33:41 crc kubenswrapper[4758]: I0130 08:33:41.917253 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a74af45-ed4e-4d30-b686-942663e223c6-kube-api-access-xb7gf" (OuterVolumeSpecName: "kube-api-access-xb7gf") pod "3a74af45-ed4e-4d30-b686-942663e223c6" (UID: "3a74af45-ed4e-4d30-b686-942663e223c6"). InnerVolumeSpecName "kube-api-access-xb7gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:33:41 crc kubenswrapper[4758]: I0130 08:33:41.928816 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a74af45-ed4e-4d30-b686-942663e223c6" (UID: "3a74af45-ed4e-4d30-b686-942663e223c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.008762 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.008795 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb7gf\" (UniqueName: \"kubernetes.io/projected/3a74af45-ed4e-4d30-b686-942663e223c6-kube-api-access-xb7gf\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.008806 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a74af45-ed4e-4d30-b686-942663e223c6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.142426 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bc5l2" event={"ID":"3a74af45-ed4e-4d30-b686-942663e223c6","Type":"ContainerDied","Data":"1dd118451b6794cd44f2978fc2e3be0c8174873ed3e0e0c9a87ba8e5481221b7"} Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.142514 4758 scope.go:117] "RemoveContainer" containerID="bf1daaf06aa1582152a53746198f0e79dd8b28ceb6882633260dce64e46fe554" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.142586 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bc5l2" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.178300 4758 scope.go:117] "RemoveContainer" containerID="135ca0181a7c33753604da0d83c5eecfea5b146962a0bc2ffbf124c120de0fb0" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.178634 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bc5l2"] Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.181439 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bc5l2"] Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.192695 4758 scope.go:117] "RemoveContainer" containerID="9f60c8fda6e7e66a9633ce66ac886aa456dd7fafe83d70de6130a7dc9a89f0d9" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.281440 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w9whf"] Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.281699 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w9whf" podUID="498010c8-fcda-4462-864d-88d7f70c2d54" containerName="registry-server" containerID="cri-o://e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2" gracePeriod=2 Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.619630 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.716813 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-utilities\") pod \"498010c8-fcda-4462-864d-88d7f70c2d54\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.716885 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-catalog-content\") pod \"498010c8-fcda-4462-864d-88d7f70c2d54\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.716914 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glx9s\" (UniqueName: \"kubernetes.io/projected/498010c8-fcda-4462-864d-88d7f70c2d54-kube-api-access-glx9s\") pod \"498010c8-fcda-4462-864d-88d7f70c2d54\" (UID: \"498010c8-fcda-4462-864d-88d7f70c2d54\") " Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.717668 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-utilities" (OuterVolumeSpecName: "utilities") pod "498010c8-fcda-4462-864d-88d7f70c2d54" (UID: "498010c8-fcda-4462-864d-88d7f70c2d54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.723634 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498010c8-fcda-4462-864d-88d7f70c2d54-kube-api-access-glx9s" (OuterVolumeSpecName: "kube-api-access-glx9s") pod "498010c8-fcda-4462-864d-88d7f70c2d54" (UID: "498010c8-fcda-4462-864d-88d7f70c2d54"). InnerVolumeSpecName "kube-api-access-glx9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.818542 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.818574 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glx9s\" (UniqueName: \"kubernetes.io/projected/498010c8-fcda-4462-864d-88d7f70c2d54-kube-api-access-glx9s\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.861507 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "498010c8-fcda-4462-864d-88d7f70c2d54" (UID: "498010c8-fcda-4462-864d-88d7f70c2d54"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:33:42 crc kubenswrapper[4758]: I0130 08:33:42.920008 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/498010c8-fcda-4462-864d-88d7f70c2d54-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.149395 4758 generic.go:334] "Generic (PLEG): container finished" podID="498010c8-fcda-4462-864d-88d7f70c2d54" containerID="e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2" exitCode=0 Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.149438 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9whf" event={"ID":"498010c8-fcda-4462-864d-88d7f70c2d54","Type":"ContainerDied","Data":"e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2"} Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.149470 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9whf" event={"ID":"498010c8-fcda-4462-864d-88d7f70c2d54","Type":"ContainerDied","Data":"3ca3f01b986f56ea3541c35403f896246a5783f95ef8ac463437f4a2145f8bd8"} Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.149492 4758 scope.go:117] "RemoveContainer" containerID="e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.149497 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w9whf" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.174656 4758 scope.go:117] "RemoveContainer" containerID="ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.188265 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w9whf"] Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.191063 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w9whf"] Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.201252 4758 scope.go:117] "RemoveContainer" containerID="500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.226549 4758 scope.go:117] "RemoveContainer" containerID="e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2" Jan 30 08:33:43 crc kubenswrapper[4758]: E0130 08:33:43.226946 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2\": container with ID starting with e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2 not found: ID does not exist" containerID="e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.226978 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2"} err="failed to get container status \"e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2\": rpc error: code = NotFound desc = could not find container \"e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2\": container with ID starting with e2d2d20209d1fa9e50922bed31662ca0d0bf35514599ed844b6337a3125037f2 not found: ID does not exist" Jan 30 08:33:43 crc 
kubenswrapper[4758]: I0130 08:33:43.226997 4758 scope.go:117] "RemoveContainer" containerID="ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0" Jan 30 08:33:43 crc kubenswrapper[4758]: E0130 08:33:43.227391 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0\": container with ID starting with ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0 not found: ID does not exist" containerID="ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.227434 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0"} err="failed to get container status \"ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0\": rpc error: code = NotFound desc = could not find container \"ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0\": container with ID starting with ef0b9c8efdfaa41fe29893f022a7af7888dd6cfb131affcb4578017f66ef12a0 not found: ID does not exist" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.227487 4758 scope.go:117] "RemoveContainer" containerID="500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336" Jan 30 08:33:43 crc kubenswrapper[4758]: E0130 08:33:43.227876 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336\": container with ID starting with 500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336 not found: ID does not exist" containerID="500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.227924 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336"} err="failed to get container status \"500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336\": rpc error: code = NotFound desc = could not find container \"500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336\": container with ID starting with 500c6921d0cc3ba34f4720d28e2540fe7cd0423ac0c62b9d582ddf5508cf2336 not found: ID does not exist" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.776934 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a74af45-ed4e-4d30-b686-942663e223c6" path="/var/lib/kubelet/pods/3a74af45-ed4e-4d30-b686-942663e223c6/volumes" Jan 30 08:33:43 crc kubenswrapper[4758]: I0130 08:33:43.777537 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="498010c8-fcda-4462-864d-88d7f70c2d54" path="/var/lib/kubelet/pods/498010c8-fcda-4462-864d-88d7f70c2d54/volumes" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.188467 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" podUID="92757cb7-5e41-4c2d-bbdf-0e4010e4611d" containerName="oauth-openshift" containerID="cri-o://0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34" gracePeriod=15 Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.586387 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747472 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-session\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747537 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-service-ca\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747597 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-trusted-ca-bundle\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747617 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-policies\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747640 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-error\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747675 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvl5k\" (UniqueName: \"kubernetes.io/projected/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-kube-api-access-qvl5k\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747695 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-ocp-branding-template\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747718 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-login\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747733 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-serving-cert\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc 
kubenswrapper[4758]: I0130 08:33:50.747789 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-provider-selection\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.747826 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-idp-0-file-data\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.748758 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-router-certs\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.748781 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-dir\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.748801 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-cliconfig\") pod \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\" (UID: \"92757cb7-5e41-4c2d-bbdf-0e4010e4611d\") " Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.749269 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.749173 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.749438 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.750724 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.751189 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.754203 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.754545 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.754874 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.755462 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-kube-api-access-qvl5k" (OuterVolumeSpecName: "kube-api-access-qvl5k") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "kube-api-access-qvl5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.755693 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.758621 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.761501 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.768557 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.770291 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "92757cb7-5e41-4c2d-bbdf-0e4010e4611d" (UID: "92757cb7-5e41-4c2d-bbdf-0e4010e4611d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.849985 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850024 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850051 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850063 4758 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850073 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850082 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvl5k\" (UniqueName: \"kubernetes.io/projected/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-kube-api-access-qvl5k\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850092 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850100 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850109 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850120 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850130 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850139 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850147 4758 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:50 crc kubenswrapper[4758]: I0130 08:33:50.850155 4758 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92757cb7-5e41-4c2d-bbdf-0e4010e4611d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.203585 4758 generic.go:334] "Generic (PLEG): container finished" podID="92757cb7-5e41-4c2d-bbdf-0e4010e4611d" containerID="0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34" exitCode=0 Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.203623 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" event={"ID":"92757cb7-5e41-4c2d-bbdf-0e4010e4611d","Type":"ContainerDied","Data":"0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34"} Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.203662 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.203681 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ssdl5" event={"ID":"92757cb7-5e41-4c2d-bbdf-0e4010e4611d","Type":"ContainerDied","Data":"53b5888c06d75276ec1154090765c765987de449f501b8cfa950546c58f4dcb5"} Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.203705 4758 scope.go:117] "RemoveContainer" containerID="0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34" Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.223724 4758 scope.go:117] "RemoveContainer" containerID="0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34" Jan 30 08:33:51 crc kubenswrapper[4758]: E0130 08:33:51.224106 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34\": container with ID starting with 0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34 not found: ID does not exist" containerID="0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34" Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.224136 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34"} err="failed to get container status \"0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34\": rpc error: code = NotFound desc = could not find container \"0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34\": container with ID starting with 0d903f2cbc989ab72696b1b219cdcc5c69125f364bbf3eabfbfd7fcf6b8ddf34 not found: ID does not exist" Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.251714 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ssdl5"] Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.255294 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-ssdl5"] Jan 30 08:33:51 crc kubenswrapper[4758]: I0130 08:33:51.779636 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92757cb7-5e41-4c2d-bbdf-0e4010e4611d" path="/var/lib/kubelet/pods/92757cb7-5e41-4c2d-bbdf-0e4010e4611d/volumes" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.960636 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-649d76d5b4-rjx9k"] Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.961773 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.961802 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.961830 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a74af45-ed4e-4d30-b686-942663e223c6" containerName="extract-utilities" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.961847 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a74af45-ed4e-4d30-b686-942663e223c6" containerName="extract-utilities" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.961862 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498010c8-fcda-4462-864d-88d7f70c2d54" containerName="extract-utilities" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.961875 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="498010c8-fcda-4462-864d-88d7f70c2d54" containerName="extract-utilities" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.961891 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerName="extract-content" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.961905 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerName="extract-content" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.961925 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498010c8-fcda-4462-864d-88d7f70c2d54" containerName="extract-content" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.961939 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="498010c8-fcda-4462-864d-88d7f70c2d54" containerName="extract-content" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.961958 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerName="extract-utilities" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.961975 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerName="extract-utilities" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.961996 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerName="extract-utilities" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962013 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerName="extract-utilities" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.962074 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92757cb7-5e41-4c2d-bbdf-0e4010e4611d" containerName="oauth-openshift" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 
08:33:59.962096 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="92757cb7-5e41-4c2d-bbdf-0e4010e4611d" containerName="oauth-openshift" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.962122 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962139 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.962157 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a74af45-ed4e-4d30-b686-942663e223c6" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962171 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a74af45-ed4e-4d30-b686-942663e223c6" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.962191 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a74af45-ed4e-4d30-b686-942663e223c6" containerName="extract-content" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962203 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a74af45-ed4e-4d30-b686-942663e223c6" containerName="extract-content" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.962221 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498010c8-fcda-4462-864d-88d7f70c2d54" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962234 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="498010c8-fcda-4462-864d-88d7f70c2d54" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: E0130 08:33:59.962254 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerName="extract-content" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962266 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerName="extract-content" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962443 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="498010c8-fcda-4462-864d-88d7f70c2d54" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962463 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a74af45-ed4e-4d30-b686-942663e223c6" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962480 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ea0fe7-14a8-4194-abfe-dfc8634b8acf" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962509 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="34d332c4-fa91-4d24-9561-1b68c12a8224" containerName="registry-server" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.962529 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="92757cb7-5e41-4c2d-bbdf-0e4010e4611d" containerName="oauth-openshift" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.963152 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.966887 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.967139 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.972171 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.972188 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.972285 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.972296 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.973015 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.973376 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.973396 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.973425 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.973452 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.974415 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.977268 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-serving-cert\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.977374 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-audit-dir\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.977452 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.977491 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.977553 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.977593 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-template-error\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.977781 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-cliconfig\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.977957 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-session\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.978100 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5xjm\" (UniqueName: \"kubernetes.io/projected/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-kube-api-access-d5xjm\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.978170 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-service-ca\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.978227 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-template-login\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.978307 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-audit-policies\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.978344 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.978369 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.991985 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 08:33:59 crc kubenswrapper[4758]: I0130 08:33:59.995502 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.004920 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-649d76d5b4-rjx9k"] Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.010828 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079610 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-service-ca\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079696 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-template-login\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079742 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-audit-policies\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079767 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079794 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079836 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-serving-cert\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079855 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-audit-dir\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079886 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-router-certs\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079904 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079936 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079964 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-template-error\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.079990 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-cliconfig\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.080023 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-session\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.080070 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5xjm\" (UniqueName: \"kubernetes.io/projected/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-kube-api-access-d5xjm\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.082008 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-service-ca\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.082913 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-cliconfig\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.083200 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.084897 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-audit-policies\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.085870 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-audit-dir\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: 
\"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.088681 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.089165 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-session\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.090509 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.091805 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.092882 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-serving-cert\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.093570 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-template-error\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.096023 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-system-router-certs\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.097179 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-v4-0-config-user-template-login\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: 
\"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.101498 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5xjm\" (UniqueName: \"kubernetes.io/projected/0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017-kube-api-access-d5xjm\") pod \"oauth-openshift-649d76d5b4-rjx9k\" (UID: \"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017\") " pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.304186 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:00 crc kubenswrapper[4758]: I0130 08:34:00.523633 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-649d76d5b4-rjx9k"] Jan 30 08:34:01 crc kubenswrapper[4758]: I0130 08:34:01.266103 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" event={"ID":"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017","Type":"ContainerStarted","Data":"b77e2aabb71f998e385b8e95b1fc20acfd490d54a3d696f84af9f428b41cdab8"} Jan 30 08:34:01 crc kubenswrapper[4758]: I0130 08:34:01.266143 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" event={"ID":"0d2b61ca-f3a1-4a59-ab2a-dc143c2cf017","Type":"ContainerStarted","Data":"f8c27f1a34b6a31d3c73c2ff171a9b101be127e08bc5a7e6c51bc6b8fcf504ac"} Jan 30 08:34:01 crc kubenswrapper[4758]: I0130 08:34:01.267345 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:01 crc kubenswrapper[4758]: I0130 08:34:01.277022 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" Jan 30 08:34:01 crc kubenswrapper[4758]: I0130 08:34:01.307624 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-649d76d5b4-rjx9k" podStartSLOduration=36.307591232 podStartE2EDuration="36.307591232s" podCreationTimestamp="2026-01-30 08:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:34:01.299316183 +0000 UTC m=+246.271627774" watchObservedRunningTime="2026-01-30 08:34:01.307591232 +0000 UTC m=+246.279902783" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.560997 4758 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.561834 4758 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.562054 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4" gracePeriod=15 Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.562205 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.562521 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4" gracePeriod=15 Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.562570 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7" gracePeriod=15 Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.562675 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405" gracePeriod=15 Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.562600 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0" gracePeriod=15 Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.563910 4758 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 08:34:02 crc kubenswrapper[4758]: E0130 08:34:02.564049 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564060 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 08:34:02 crc kubenswrapper[4758]: E0130 08:34:02.564069 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564075 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 08:34:02 crc kubenswrapper[4758]: E0130 08:34:02.564083 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564090 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 08:34:02 crc kubenswrapper[4758]: E0130 08:34:02.564101 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564106 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 08:34:02 crc kubenswrapper[4758]: E0130 08:34:02.564113 4758 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564118 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 08:34:02 crc kubenswrapper[4758]: E0130 08:34:02.564128 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564134 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 08:34:02 crc kubenswrapper[4758]: E0130 08:34:02.564141 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564147 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564266 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564276 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564283 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564294 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564302 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.564311 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.617209 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.617579 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.617770 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.617935 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.618144 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.618321 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.618522 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.618792 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720238 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720281 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720307 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720325 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720329 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720365 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720343 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720394 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720387 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720482 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720465 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720495 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720538 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720571 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720679 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: I0130 08:34:02.720768 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:02 crc kubenswrapper[4758]: E0130 08:34:02.814806 4758 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.176:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" volumeName="registry-storage" Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.278936 4758 generic.go:334] "Generic (PLEG): container finished" podID="05bc5468-8b30-42ea-a229-dba54dddcdaf" containerID="680d53edcf23e988cb0b58dfc2997e729a64c4eace90175659a7caa821078c76" exitCode=0 Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.279023 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"05bc5468-8b30-42ea-a229-dba54dddcdaf","Type":"ContainerDied","Data":"680d53edcf23e988cb0b58dfc2997e729a64c4eace90175659a7caa821078c76"} Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.279992 4758 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.282464 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.282601 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.284598 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 08:34:03 crc 
kubenswrapper[4758]: I0130 08:34:03.286006 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4" exitCode=0 Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.286037 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7" exitCode=0 Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.286050 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0" exitCode=0 Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.286080 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405" exitCode=2 Jan 30 08:34:03 crc kubenswrapper[4758]: I0130 08:34:03.286138 4758 scope.go:117] "RemoveContainer" containerID="0dcf202ff2e6c2e85508e2af6ee5468a2eec9d9854662ce5ee2208ab0559b38f" Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.293177 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.482931 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.483675 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.540610 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-kubelet-dir\") pod \"05bc5468-8b30-42ea-a229-dba54dddcdaf\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.540681 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-var-lock\") pod \"05bc5468-8b30-42ea-a229-dba54dddcdaf\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.540725 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05bc5468-8b30-42ea-a229-dba54dddcdaf-kube-api-access\") pod \"05bc5468-8b30-42ea-a229-dba54dddcdaf\" (UID: \"05bc5468-8b30-42ea-a229-dba54dddcdaf\") " Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.540825 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "05bc5468-8b30-42ea-a229-dba54dddcdaf" (UID: "05bc5468-8b30-42ea-a229-dba54dddcdaf"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.540839 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-var-lock" (OuterVolumeSpecName: "var-lock") pod "05bc5468-8b30-42ea-a229-dba54dddcdaf" (UID: "05bc5468-8b30-42ea-a229-dba54dddcdaf"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.540964 4758 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.540978 4758 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/05bc5468-8b30-42ea-a229-dba54dddcdaf-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.547268 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05bc5468-8b30-42ea-a229-dba54dddcdaf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "05bc5468-8b30-42ea-a229-dba54dddcdaf" (UID: "05bc5468-8b30-42ea-a229-dba54dddcdaf"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:34:04 crc kubenswrapper[4758]: I0130 08:34:04.650514 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05bc5468-8b30-42ea-a229-dba54dddcdaf-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.300603 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"05bc5468-8b30-42ea-a229-dba54dddcdaf","Type":"ContainerDied","Data":"f8494fe1b7bf7fd3655a15c8a2966956d30c2d2ed7f835a5e513367b2ae3ac68"} Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.300938 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8494fe1b7bf7fd3655a15c8a2966956d30c2d2ed7f835a5e513367b2ae3ac68" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.300629 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.302941 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.303879 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4" exitCode=0 Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.313161 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: E0130 08:34:05.454919 4758 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: E0130 08:34:05.455493 4758 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: E0130 08:34:05.455996 4758 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: E0130 08:34:05.456483 4758 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: E0130 08:34:05.457032 4758 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.457127 4758 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 08:34:05 crc kubenswrapper[4758]: E0130 08:34:05.457569 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="200ms" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.611209 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.611776 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.612216 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.612445 4758 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: E0130 08:34:05.658633 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="400ms" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.661381 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.661459 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.661490 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.661596 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.661613 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.661711 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.661956 4758 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.661985 4758 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.661998 4758 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.772909 4758 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.773300 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:05 crc kubenswrapper[4758]: I0130 08:34:05.775302 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 30 08:34:06 crc kubenswrapper[4758]: E0130 08:34:06.059251 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="800ms" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.311427 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.313005 4758 scope.go:117] "RemoveContainer" containerID="11f4cd453a5583c3d8b0c805dd7a8d6c09ad6430d936849f8c7c0d841bd2ede4" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.313078 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.314017 4758 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.314448 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.317368 4758 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.317726 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.327838 4758 scope.go:117] "RemoveContainer" containerID="d54cb1545127dc037b434cad85c988757dc6e13629d0b4666a53e8cacffb97e7" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.338595 4758 scope.go:117] "RemoveContainer" containerID="52629eb5fe57d1502b950ec1c0bbfceb006874a9232cae4fbb71f1bf9e4056a0" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.349266 4758 scope.go:117] "RemoveContainer" containerID="a1ec61f8a45304f946bc373d404868563acd11f7af848f824ab7fa9151d53405" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.360114 4758 scope.go:117] "RemoveContainer" containerID="31a213c6cd8ed626840246b3cf68abc9bbf413174ff484c2afb7b4b8839300a4" Jan 30 08:34:06 crc kubenswrapper[4758]: I0130 08:34:06.373409 4758 scope.go:117] "RemoveContainer" containerID="7be36bbc6dce817894bbc0b21b9da0ce1d730f6c58140f5abfdffbe0b0299c68" Jan 30 08:34:06 crc kubenswrapper[4758]: E0130 08:34:06.860222 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="1.6s" Jan 30 08:34:07 crc kubenswrapper[4758]: E0130 08:34:07.598898 4758 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.176:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:07 crc kubenswrapper[4758]: I0130 08:34:07.599314 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:34:07 crc kubenswrapper[4758]: E0130 08:34:07.631581 4758 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.176:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f7534a9fb1014 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 08:34:07.630446612 +0000 UTC m=+252.602758163,LastTimestamp:2026-01-30 08:34:07.630446612 +0000 UTC m=+252.602758163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 08:34:08 crc kubenswrapper[4758]: I0130 08:34:08.327005 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757"}
Jan 30 08:34:08 crc kubenswrapper[4758]: I0130 08:34:08.327980 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"577461f943558d7111a8423f0be9e192864880a8d1c7ad568b2ed286e5531fb6"}
Jan 30 08:34:08 crc kubenswrapper[4758]: I0130 08:34:08.328589 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused"
Jan 30 08:34:08 crc kubenswrapper[4758]: E0130 08:34:08.328594 4758 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.176:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:34:08 crc kubenswrapper[4758]: E0130 08:34:08.461218 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="3.2s"
Jan 30 08:34:08 crc kubenswrapper[4758]: E0130 08:34:08.663655 4758 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.176:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f7534a9fb1014 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 08:34:07.630446612 +0000 UTC m=+252.602758163,LastTimestamp:2026-01-30 08:34:07.630446612 +0000 UTC m=+252.602758163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 08:34:09 crc kubenswrapper[4758]: E0130 08:34:09.333453 4758 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.176:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:34:11 crc kubenswrapper[4758]: E0130 08:34:11.662583 4758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="6.4s"
Jan 30 08:34:15 crc kubenswrapper[4758]: I0130 08:34:15.365480 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 30 08:34:15 crc kubenswrapper[4758]: I0130 08:34:15.366020 4758 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208" exitCode=1
Jan 30 08:34:15 crc kubenswrapper[4758]: I0130 08:34:15.366087 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208"}
Jan 30 08:34:15 crc kubenswrapper[4758]: I0130 08:34:15.366620 4758 scope.go:117] "RemoveContainer" containerID="dfd3a22f022363eb304fc6c1c606e8a529478e6c375678d8fae123c42f403208"
Jan 30 08:34:15 crc kubenswrapper[4758]: I0130 08:34:15.366900 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused"
Jan 30 08:34:15 crc kubenswrapper[4758]: I0130 08:34:15.367541 4758 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.176:6443: connect: connection refused"
Jan 30 08:34:15 crc kubenswrapper[4758]: I0130 08:34:15.771664 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused"
Jan 30 08:34:15 crc kubenswrapper[4758]: I0130 08:34:15.772474 4758 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.176:6443: connect: connection refused"
Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.037289 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.374879 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.374949 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"928ac18f142e91b14f4323406959a98f45ca48e134f7d6554009452fa582685f"}
Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.376308 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused"
Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.376672 4758 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.176:6443: connect: connection refused"
Jan 30 08:34:16 crc kubenswrapper[4758]: E0130 08:34:16.492758 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:34:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:34:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:34:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T08:34:16Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused"
Jan 30 08:34:16 crc kubenswrapper[4758]: E0130 08:34:16.493220 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused"
Jan 30 08:34:16 crc kubenswrapper[4758]: E0130 08:34:16.493672 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused"
Jan 30 08:34:16 crc kubenswrapper[4758]: E0130 08:34:16.494087 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused"
Jan 30 08:34:16 crc kubenswrapper[4758]: E0130 08:34:16.494342 4758 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused"
Jan 30 08:34:16 crc kubenswrapper[4758]: E0130 08:34:16.494385 4758 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.767768 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.770272 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused"
Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.771122 4758 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.176:6443: connect: connection refused"
Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.784267 4758 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7"
Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.784311 4758 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7"
Jan 30 08:34:16 crc kubenswrapper[4758]: E0130 08:34:16.784933 4758 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:34:16 crc kubenswrapper[4758]: I0130 08:34:16.785444 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:34:16 crc kubenswrapper[4758]: W0130 08:34:16.807565 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-1a31a90a6f7b84f95c028b4305344f3bc0a1f5b7fc7f1771cc999d07fcf7a347 WatchSource:0}: Error finding container 1a31a90a6f7b84f95c028b4305344f3bc0a1f5b7fc7f1771cc999d07fcf7a347: Status 404 returned error can't find the container with id 1a31a90a6f7b84f95c028b4305344f3bc0a1f5b7fc7f1771cc999d07fcf7a347
Jan 30 08:34:17 crc kubenswrapper[4758]: I0130 08:34:17.381144 4758 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="994db0503d06c8a0e6c1755ffca83d8f3b773cc2fb92c1a4849df5db46270e6d" exitCode=0
Jan 30 08:34:17 crc kubenswrapper[4758]: I0130 08:34:17.381288 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"994db0503d06c8a0e6c1755ffca83d8f3b773cc2fb92c1a4849df5db46270e6d"}
Jan 30 08:34:17 crc kubenswrapper[4758]: I0130 08:34:17.381620 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1a31a90a6f7b84f95c028b4305344f3bc0a1f5b7fc7f1771cc999d07fcf7a347"}
Jan 30 08:34:17 crc kubenswrapper[4758]: I0130 08:34:17.381915 4758 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7"
Jan 30 08:34:17 crc kubenswrapper[4758]: I0130 08:34:17.381932 4758 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7"
Jan 30 08:34:17 crc kubenswrapper[4758]: I0130 08:34:17.382852 4758 status_manager.go:851] "Failed to get status for pod" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.176:6443: connect: connection refused"
Jan 30 08:34:17 crc kubenswrapper[4758]: E0130 08:34:17.382941 4758 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.176:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:34:17 crc kubenswrapper[4758]: I0130 08:34:17.383194 4758 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.176:6443: connect: connection refused"
Jan 30 08:34:18 crc kubenswrapper[4758]: I0130 08:34:18.397733 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d3b6063398ec7c6205189ec55df2092e5a6b3bd041e8e05e4a8f7d797007069e"}
Jan 30 08:34:18 crc kubenswrapper[4758]: I0130 08:34:18.398134 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"11699d01ad01c89bf19be1f0e389482dd270044fa8b15b80ab0e5312f3a77d04"}
Jan 30 08:34:18 crc kubenswrapper[4758]: I0130 08:34:18.398150 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"76db6a6575829613d713514dfe407c3adb132e57a1a9ea32d281c2c84a2638c9"}
Jan 30 08:34:18 crc kubenswrapper[4758]: I0130 08:34:18.398162 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ff2b56706c15dbf79f056f889c62678cea1f5108e6dfeab706d716de046a9891"}
Jan 30 08:34:19 crc kubenswrapper[4758]: I0130 08:34:19.405867 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"50627c6b6f71291632d1ef441f903ebea9406343bb93d10caf6075404f38d442"}
Jan 30 08:34:19 crc kubenswrapper[4758]: I0130 08:34:19.406157 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:34:19 crc kubenswrapper[4758]: I0130 08:34:19.406183 4758 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7"
Jan 30 08:34:19 crc kubenswrapper[4758]: I0130 08:34:19.406207 4758 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7"
Jan 30 08:34:20 crc kubenswrapper[4758]: I0130 08:34:20.638806 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 08:34:21 crc kubenswrapper[4758]: I0130 08:34:21.786485 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:34:21 crc kubenswrapper[4758]: I0130 08:34:21.786560 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:34:21 crc kubenswrapper[4758]: I0130 08:34:21.791290 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:34:24 crc kubenswrapper[4758]: I0130 08:34:24.420596 4758 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:34:25 crc kubenswrapper[4758]: I0130 08:34:25.439637 4758 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7"
Jan 30 08:34:25 crc kubenswrapper[4758]: I0130 08:34:25.439690 4758 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7"
Jan 30 08:34:25 crc kubenswrapper[4758]: I0130 08:34:25.444715 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:34:25 crc kubenswrapper[4758]: I0130 08:34:25.798949 4758 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b1e8c927-3894-4805-9894-26461b80c7dd"
Jan 30 08:34:26 crc kubenswrapper[4758]: I0130 08:34:26.037882 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 08:34:26 crc kubenswrapper[4758]: I0130 08:34:26.041733 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 08:34:26 crc kubenswrapper[4758]: I0130 08:34:26.444422 4758 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7"
Jan 30 08:34:26 crc kubenswrapper[4758]: I0130 08:34:26.444455 4758 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d88a0eb-98f3-4e2b-b076-4454822dbea7"
Jan 30 08:34:26 crc kubenswrapper[4758]: I0130 08:34:26.448429 4758 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b1e8c927-3894-4805-9894-26461b80c7dd"
Jan 30 08:34:26 crc kubenswrapper[4758]: I0130 08:34:26.449525 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 08:34:33 crc kubenswrapper[4758]: I0130 08:34:33.890118 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 30 08:34:34 crc kubenswrapper[4758]: I0130 08:34:34.396638 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 30 08:34:34 crc kubenswrapper[4758]: I0130 08:34:34.670917 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 30 08:34:34 crc kubenswrapper[4758]: I0130 08:34:34.830373 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 30 08:34:35 crc kubenswrapper[4758]: I0130 08:34:35.063087 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 30 08:34:35 crc kubenswrapper[4758]: I0130 08:34:35.307924 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 30 08:34:35 crc kubenswrapper[4758]: I0130 08:34:35.312657 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 30 08:34:35 crc kubenswrapper[4758]: I0130 08:34:35.457194 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 30 08:34:35 crc kubenswrapper[4758]: I0130 08:34:35.465421 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 30 08:34:35 crc kubenswrapper[4758]: I0130 08:34:35.666622 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 08:34:35 crc kubenswrapper[4758]: I0130 08:34:35.721970 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 30 08:34:35 crc kubenswrapper[4758]: I0130 08:34:35.762549 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 08:34:36 crc kubenswrapper[4758]: I0130 08:34:36.135876 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 30 08:34:36 crc kubenswrapper[4758]: I0130 08:34:36.187321 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 30 08:34:36 crc kubenswrapper[4758]: I0130 08:34:36.416856 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 08:34:36 crc kubenswrapper[4758]: I0130 08:34:36.500511 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 30 08:34:36 crc kubenswrapper[4758]: I0130 08:34:36.666189 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 30 08:34:36 crc kubenswrapper[4758]: I0130 08:34:36.685335 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.099029 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.291604 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.316356 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.358525 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.386241 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.427116 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.526118 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.532694 4758 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.554847 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.608806 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.654857 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.686022 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.695559 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.886786 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 30 08:34:37 crc kubenswrapper[4758]: I0130 08:34:37.958793 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.027419 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.030574 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.191714 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.459506 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.587980 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.654137 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.691896 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.841257 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.910220 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.933838 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 30 08:34:38 crc kubenswrapper[4758]: I0130 08:34:38.985710 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.018421 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.023618 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.045490 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.098657 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.139785 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.142672 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.211146 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.225497 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.358540 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.397475 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.483285 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.521482 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.597768 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.671290 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.708988 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.732717 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.750505 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.756380 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.843561 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.878162 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.917758 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 30 08:34:39 crc kubenswrapper[4758]: I0130 08:34:39.944905 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.046783 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.142239 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.172390 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.219086 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.225124 4758 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.239539 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.239766 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.300832 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.361235 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.380934 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.466773 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.483197 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.512409 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.754773 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.757629 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.845950 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 30 08:34:40 crc kubenswrapper[4758]: I0130 08:34:40.925115 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.058271 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.102488 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.246351 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.246715 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.302108 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.450493 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.511552 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.516743 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.553787 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.619521 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.711869 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.747377 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.805596 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.827818 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.968704 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 30 08:34:41 crc kubenswrapper[4758]: I0130 08:34:41.972419 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.052554 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.188890 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.264697 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.318754 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.342345 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.408929 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.425630 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.463995 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.466121 4758 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.478026 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.541092 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.725338 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.734672 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.754674 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.840813 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.853874 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.871686 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.954375 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 30 08:34:42 crc kubenswrapper[4758]: I0130 08:34:42.983598 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.003216 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.091809 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.097395 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.141367 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.202128 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.205006 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.248598 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.302570 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.334872 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.449683 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.489416 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.540225 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.616566 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.620088 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.623881 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.672479 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.689831 4758 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.717578 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.762174 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.835335 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.871687 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 30 08:34:43 crc kubenswrapper[4758]: I0130 08:34:43.968503 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.025400 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.080379 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.122247 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.150781 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.179498 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.386819 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.397186 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.418652 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.426112 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.522420 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.522748 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.582750 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.642119 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.683701 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.704190 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.704468 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.707593 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.723340 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.727631 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.760096 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.768538 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.823801 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.842263 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.869118 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 30 08:34:44 crc kubenswrapper[4758]: I0130 08:34:44.921249 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.000637 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.075456 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.135715 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.141374 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.213801 4758 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.294771 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.346371 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.393260 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.393504 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.413933 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.423435 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.464192 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.465335 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.489580 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.547662 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.560038 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.609730 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.665693 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.673244 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.714306 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.717164 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.808257 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.870933 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.954454 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 30 08:34:45 crc kubenswrapper[4758]: I0130 08:34:45.977314 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.007300 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.053192 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.086164 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.192825 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.220768 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.230704 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.237720 4758 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.241863 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.241908 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.247693 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.255900 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.258962 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.285997 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.285979238 podStartE2EDuration="22.285979238s" podCreationTimestamp="2026-01-30 08:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:34:46.264322292 +0000 UTC m=+291.236633843" watchObservedRunningTime="2026-01-30 08:34:46.285979238 +0000 UTC m=+291.258290789"
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.407334 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.594659 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.737771 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.737872 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.860712 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.883809 4758 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.884062 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757" gracePeriod=5
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.909564 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 30 08:34:46 crc kubenswrapper[4758]: I0130 08:34:46.911618 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.011876 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.012467 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.051018 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.064912 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.070022 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.089679 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.106808 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.112241 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.177824 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.216091 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.218666 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.300847 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.342381 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.473494 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.475274 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.566262 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.585916 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.647270 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.700217 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.728648 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 30 08:34:47 crc kubenswrapper[4758]: I0130 08:34:47.741437 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.150846 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.152756 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.265888 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.365609 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.381930 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.489568 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.498564 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.626871 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.666417 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.758668 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.769283 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 30 08:34:48 crc kubenswrapper[4758]: I0130 08:34:48.814605 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 30 08:34:49 crc kubenswrapper[4758]: I0130 08:34:49.031216 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 30 08:34:49 crc kubenswrapper[4758]: I0130 08:34:49.037616 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 30 08:34:49 crc kubenswrapper[4758]: I0130 08:34:49.084594 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 30 08:34:49 crc kubenswrapper[4758]: I0130 08:34:49.687903 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 30 08:34:49 crc kubenswrapper[4758]: I0130 08:34:49.827479 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 30 08:34:50 crc kubenswrapper[4758]: I0130 08:34:50.050574 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 30 08:34:50 crc kubenswrapper[4758]: I0130 08:34:50.219461 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 30 08:34:50 crc kubenswrapper[4758]: I0130 08:34:50.271504 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 08:34:50 crc kubenswrapper[4758]: I0130 08:34:50.387411 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 30 08:34:50 crc kubenswrapper[4758]: I0130 08:34:50.586665 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 30 08:34:50 crc kubenswrapper[4758]: I0130 08:34:50.878201 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 30 08:34:51 crc kubenswrapper[4758]: I0130 08:34:51.661279 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.459886 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.460223 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.490451 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.490492 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.490586 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.490617 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.490639 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.490712 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.490715 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.490735 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.490766 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.491029 4758 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.491086 4758 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.491101 4758 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.491111 4758 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.497443 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.585603 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.585663 4758 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757" exitCode=137 Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.585715 4758 scope.go:117] "RemoveContainer" containerID="8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.585761 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.592997 4758 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.617176 4758 scope.go:117] "RemoveContainer" containerID="8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757" Jan 30 08:34:52 crc kubenswrapper[4758]: E0130 08:34:52.617670 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757\": container with ID starting with 8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757 not found: ID does not exist" containerID="8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.617727 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757"} err="failed to get container status \"8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757\": rpc error: code = NotFound desc = could not find container \"8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757\": container with ID starting with 8f033956055573255cb6a1c23242e75c962dc435c2ed84bed3698862f0686757 not found: ID does not exist" Jan 30 08:34:52 crc kubenswrapper[4758]: I0130 08:34:52.756735 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 08:34:53 crc kubenswrapper[4758]: I0130 08:34:53.774855 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 30 08:34:55 crc kubenswrapper[4758]: I0130 08:34:55.533916 4758 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 30 08:35:06 crc kubenswrapper[4758]: I0130 08:35:06.658280 4758 generic.go:334] "Generic (PLEG): container finished" podID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerID="00e032b52468c18324073e8f68c53a2950d0c1e9e92eb63a8cddb5aaf6d5f40e" exitCode=0 Jan 30 08:35:06 crc kubenswrapper[4758]: I0130 08:35:06.658351 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" event={"ID":"49a66643-dc8c-4c84-9345-7f98676dc1d3","Type":"ContainerDied","Data":"00e032b52468c18324073e8f68c53a2950d0c1e9e92eb63a8cddb5aaf6d5f40e"} Jan 30 08:35:06 crc kubenswrapper[4758]: I0130 08:35:06.659378 4758 scope.go:117] "RemoveContainer" containerID="00e032b52468c18324073e8f68c53a2950d0c1e9e92eb63a8cddb5aaf6d5f40e" Jan 30 08:35:06 crc kubenswrapper[4758]: I0130 08:35:06.915419 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:35:06 crc kubenswrapper[4758]: I0130 08:35:06.916013 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:35:07 crc kubenswrapper[4758]: I0130 08:35:07.667818 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" 
event={"ID":"49a66643-dc8c-4c84-9345-7f98676dc1d3","Type":"ContainerStarted","Data":"9289bf34b4fd965a08ae479d84fd68bacbc2b99807df3ddbee8cd5dde015a76b"} Jan 30 08:35:07 crc kubenswrapper[4758]: I0130 08:35:07.668511 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:35:07 crc kubenswrapper[4758]: I0130 08:35:07.672221 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.006723 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9kt48"] Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.007521 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" podUID="deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" containerName="controller-manager" containerID="cri-o://483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d" gracePeriod=30 Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.112936 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7"] Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.113155 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" podUID="31897db9-9dd1-42a9-8eae-b5e13e113a3c" containerName="route-controller-manager" containerID="cri-o://f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83" gracePeriod=30 Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.343744 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.420695 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.543145 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-config\") pod \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.543210 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31897db9-9dd1-42a9-8eae-b5e13e113a3c-serving-cert\") pod \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.543261 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-client-ca\") pod \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.543346 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-client-ca\") pod \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.543395 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-serving-cert\") pod \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.543458 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-proxy-ca-bundles\") pod \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.543533 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg4sc\" (UniqueName: \"kubernetes.io/projected/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-kube-api-access-gg4sc\") pod \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\" (UID: \"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a\") " Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.543577 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-config\") pod \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.543617 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6shg\" (UniqueName: \"kubernetes.io/projected/31897db9-9dd1-42a9-8eae-b5e13e113a3c-kube-api-access-m6shg\") pod \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\" (UID: \"31897db9-9dd1-42a9-8eae-b5e13e113a3c\") " Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.544109 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-client-ca" (OuterVolumeSpecName: "client-ca") pod "31897db9-9dd1-42a9-8eae-b5e13e113a3c" 
(UID: "31897db9-9dd1-42a9-8eae-b5e13e113a3c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.544195 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-config" (OuterVolumeSpecName: "config") pod "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" (UID: "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.544358 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-client-ca" (OuterVolumeSpecName: "client-ca") pod "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" (UID: "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.544748 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" (UID: "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.544882 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-config" (OuterVolumeSpecName: "config") pod "31897db9-9dd1-42a9-8eae-b5e13e113a3c" (UID: "31897db9-9dd1-42a9-8eae-b5e13e113a3c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.549275 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-kube-api-access-gg4sc" (OuterVolumeSpecName: "kube-api-access-gg4sc") pod "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" (UID: "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a"). InnerVolumeSpecName "kube-api-access-gg4sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.549280 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31897db9-9dd1-42a9-8eae-b5e13e113a3c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "31897db9-9dd1-42a9-8eae-b5e13e113a3c" (UID: "31897db9-9dd1-42a9-8eae-b5e13e113a3c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.549350 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31897db9-9dd1-42a9-8eae-b5e13e113a3c-kube-api-access-m6shg" (OuterVolumeSpecName: "kube-api-access-m6shg") pod "31897db9-9dd1-42a9-8eae-b5e13e113a3c" (UID: "31897db9-9dd1-42a9-8eae-b5e13e113a3c"). InnerVolumeSpecName "kube-api-access-m6shg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.550352 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" (UID: "deddb42b-bfb7-4c61-af8d-f339f8d4ca4a"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.645343 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.645397 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.645421 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.645447 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg4sc\" (UniqueName: \"kubernetes.io/projected/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-kube-api-access-gg4sc\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.645466 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31897db9-9dd1-42a9-8eae-b5e13e113a3c-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.645483 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6shg\" (UniqueName: \"kubernetes.io/projected/31897db9-9dd1-42a9-8eae-b5e13e113a3c-kube-api-access-m6shg\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.645497 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.645555 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31897db9-9dd1-42a9-8eae-b5e13e113a3c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.645570 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.841983 4758 generic.go:334] "Generic (PLEG): container finished" podID="deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" containerID="483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d" exitCode=0 Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.842076 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" event={"ID":"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a","Type":"ContainerDied","Data":"483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d"} Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.842160 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" event={"ID":"deddb42b-bfb7-4c61-af8d-f339f8d4ca4a","Type":"ContainerDied","Data":"1a4096fd26f7cd35b668bd3b74bb0a13df67390a721f7d3ce8bb74a60052f47a"} Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.842157 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9kt48" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.842223 4758 scope.go:117] "RemoveContainer" containerID="483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.845892 4758 generic.go:334] "Generic (PLEG): container finished" podID="31897db9-9dd1-42a9-8eae-b5e13e113a3c" containerID="f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83" exitCode=0 Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.845942 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" event={"ID":"31897db9-9dd1-42a9-8eae-b5e13e113a3c","Type":"ContainerDied","Data":"f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83"} Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.845969 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" event={"ID":"31897db9-9dd1-42a9-8eae-b5e13e113a3c","Type":"ContainerDied","Data":"c8e9069e7ec2c171424e1bd93a4b8f9855a2158a2e3b30930d28fb757cb65678"} Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.846017 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.865273 4758 scope.go:117] "RemoveContainer" containerID="483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d" Jan 30 08:35:42 crc kubenswrapper[4758]: E0130 08:35:42.865689 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d\": container with ID starting with 483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d not found: ID does not exist" containerID="483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.865786 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d"} err="failed to get container status \"483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d\": rpc error: code = NotFound desc = could not find container \"483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d\": container with ID starting with 483bc2fbea1efb44fd01643184f1ed8be6eaa1d6a349eb32880b101d08cb379d not found: ID does not exist" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.865877 4758 scope.go:117] "RemoveContainer" containerID="f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.879198 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9kt48"] Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.883476 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9kt48"] Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.895925 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7"] Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.899078 4758 scope.go:117] "RemoveContainer" 
containerID="f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83" Jan 30 08:35:42 crc kubenswrapper[4758]: E0130 08:35:42.899916 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83\": container with ID starting with f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83 not found: ID does not exist" containerID="f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.899952 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83"} err="failed to get container status \"f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83\": rpc error: code = NotFound desc = could not find container \"f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83\": container with ID starting with f53477df997a8a83889f0947a897941a415da67c7ec239318357032fb3d73e83 not found: ID does not exist" Jan 30 08:35:42 crc kubenswrapper[4758]: I0130 08:35:42.900114 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n7zk7"] Jan 30 08:35:43 crc kubenswrapper[4758]: I0130 08:35:43.776144 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31897db9-9dd1-42a9-8eae-b5e13e113a3c" path="/var/lib/kubelet/pods/31897db9-9dd1-42a9-8eae-b5e13e113a3c/volumes" Jan 30 08:35:43 crc kubenswrapper[4758]: I0130 08:35:43.777305 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" path="/var/lib/kubelet/pods/deddb42b-bfb7-4c61-af8d-f339f8d4ca4a/volumes" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.038527 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v"] Jan 30 08:35:44 crc kubenswrapper[4758]: E0130 08:35:44.038784 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" containerName="controller-manager" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.038798 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" containerName="controller-manager" Jan 30 08:35:44 crc kubenswrapper[4758]: E0130 08:35:44.038810 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.038816 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 08:35:44 crc kubenswrapper[4758]: E0130 08:35:44.038830 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31897db9-9dd1-42a9-8eae-b5e13e113a3c" containerName="route-controller-manager" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.038837 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="31897db9-9dd1-42a9-8eae-b5e13e113a3c" containerName="route-controller-manager" Jan 30 08:35:44 crc kubenswrapper[4758]: E0130 08:35:44.038853 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" containerName="installer" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.038860 4758 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" containerName="installer" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.038973 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="05bc5468-8b30-42ea-a229-dba54dddcdaf" containerName="installer" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.038981 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="deddb42b-bfb7-4c61-af8d-f339f8d4ca4a" containerName="controller-manager" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.038988 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.038997 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="31897db9-9dd1-42a9-8eae-b5e13e113a3c" containerName="route-controller-manager" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.039412 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.041365 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.041837 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.042058 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.042290 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.044308 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.046136 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp"] Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.049684 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.051365 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp"] Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.054719 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.054922 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.055051 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.055351 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.055669 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.056136 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.056654 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.056979 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.062996 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v"] Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.162510 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nbnv\" (UniqueName: \"kubernetes.io/projected/2ef3df9a-441b-45c6-ae42-22bf2ced1596-kube-api-access-4nbnv\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.162561 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-config\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.162583 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-client-ca\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.162603 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-client-ca\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.162625 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-proxy-ca-bundles\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.162652 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddtrd\" (UniqueName: \"kubernetes.io/projected/19653811-dadf-4512-9f60-60cd3fcb9dc3-kube-api-access-ddtrd\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.162672 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef3df9a-441b-45c6-ae42-22bf2ced1596-serving-cert\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.162695 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19653811-dadf-4512-9f60-60cd3fcb9dc3-serving-cert\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.162719 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-config\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.263958 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nbnv\" (UniqueName: \"kubernetes.io/projected/2ef3df9a-441b-45c6-ae42-22bf2ced1596-kube-api-access-4nbnv\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.264026 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-config\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.264113 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-client-ca\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.264143 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-client-ca\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.264185 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-proxy-ca-bundles\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.264260 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddtrd\" (UniqueName: \"kubernetes.io/projected/19653811-dadf-4512-9f60-60cd3fcb9dc3-kube-api-access-ddtrd\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.264299 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef3df9a-441b-45c6-ae42-22bf2ced1596-serving-cert\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.264352 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19653811-dadf-4512-9f60-60cd3fcb9dc3-serving-cert\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.264404 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-config\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.265386 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-client-ca\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.265492 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-config\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: 
\"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.265498 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-client-ca\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.265929 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-proxy-ca-bundles\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.266913 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-config\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.269528 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef3df9a-441b-45c6-ae42-22bf2ced1596-serving-cert\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.278664 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19653811-dadf-4512-9f60-60cd3fcb9dc3-serving-cert\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.285889 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddtrd\" (UniqueName: \"kubernetes.io/projected/19653811-dadf-4512-9f60-60cd3fcb9dc3-kube-api-access-ddtrd\") pod \"route-controller-manager-5947dff4b8-nrbxp\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") " pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.286173 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nbnv\" (UniqueName: \"kubernetes.io/projected/2ef3df9a-441b-45c6-ae42-22bf2ced1596-kube-api-access-4nbnv\") pod \"controller-manager-6d8b7cb844-l2q5v\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") " pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.387924 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.396879 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.589895 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp"] Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.627153 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v"] Jan 30 08:35:44 crc kubenswrapper[4758]: W0130 08:35:44.635544 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ef3df9a_441b_45c6_ae42_22bf2ced1596.slice/crio-cb8e484918da6d053e525eab6cca1fdc7e6d78333b6e1bd25eabb42c722a0bd4 WatchSource:0}: Error finding container cb8e484918da6d053e525eab6cca1fdc7e6d78333b6e1bd25eabb42c722a0bd4: Status 404 returned error can't find the container with id cb8e484918da6d053e525eab6cca1fdc7e6d78333b6e1bd25eabb42c722a0bd4 Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.857401 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" event={"ID":"2ef3df9a-441b-45c6-ae42-22bf2ced1596","Type":"ContainerStarted","Data":"0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368"} Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.857709 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" event={"ID":"2ef3df9a-441b-45c6-ae42-22bf2ced1596","Type":"ContainerStarted","Data":"cb8e484918da6d053e525eab6cca1fdc7e6d78333b6e1bd25eabb42c722a0bd4"} Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.858024 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.858753 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" event={"ID":"19653811-dadf-4512-9f60-60cd3fcb9dc3","Type":"ContainerStarted","Data":"d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee"} Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.858771 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" event={"ID":"19653811-dadf-4512-9f60-60cd3fcb9dc3","Type":"ContainerStarted","Data":"c4d0e2b649449ef67d140345bed4ab7b2439ed578b46e3a77eefc092f8031eec"} Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.858985 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.865911 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" Jan 30 08:35:44 crc kubenswrapper[4758]: I0130 08:35:44.898891 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" podStartSLOduration=2.898869485 podStartE2EDuration="2.898869485s" podCreationTimestamp="2026-01-30 08:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:35:44.883703796 +0000 UTC 
m=+349.856015337" watchObservedRunningTime="2026-01-30 08:35:44.898869485 +0000 UTC m=+349.871181046"
Jan 30 08:35:45 crc kubenswrapper[4758]: I0130 08:35:45.134723 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp"
Jan 30 08:35:45 crc kubenswrapper[4758]: I0130 08:35:45.154588 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" podStartSLOduration=3.154568576 podStartE2EDuration="3.154568576s" podCreationTimestamp="2026-01-30 08:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:35:44.94259006 +0000 UTC m=+349.914901621" watchObservedRunningTime="2026-01-30 08:35:45.154568576 +0000 UTC m=+350.126880127"
Jan 30 08:35:48 crc kubenswrapper[4758]: I0130 08:35:48.833007 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v"]
Jan 30 08:35:48 crc kubenswrapper[4758]: I0130 08:35:48.834330 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" podUID="2ef3df9a-441b-45c6-ae42-22bf2ced1596" containerName="controller-manager" containerID="cri-o://0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368" gracePeriod=30
Jan 30 08:35:48 crc kubenswrapper[4758]: I0130 08:35:48.861411 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp"]
Jan 30 08:35:48 crc kubenswrapper[4758]: I0130 08:35:48.861591 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" podUID="19653811-dadf-4512-9f60-60cd3fcb9dc3" containerName="route-controller-manager" containerID="cri-o://d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee" gracePeriod=30
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.332934 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp"
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.381970 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v"
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.438503 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-client-ca\") pod \"19653811-dadf-4512-9f60-60cd3fcb9dc3\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") "
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.439307 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nbnv\" (UniqueName: \"kubernetes.io/projected/2ef3df9a-441b-45c6-ae42-22bf2ced1596-kube-api-access-4nbnv\") pod \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") "
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.440171 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-proxy-ca-bundles\") pod \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") "
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.440220 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-config\") pod \"19653811-dadf-4512-9f60-60cd3fcb9dc3\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") "
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.440319 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-client-ca\") pod \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") "
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.440356 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddtrd\" (UniqueName: \"kubernetes.io/projected/19653811-dadf-4512-9f60-60cd3fcb9dc3-kube-api-access-ddtrd\") pod \"19653811-dadf-4512-9f60-60cd3fcb9dc3\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") "
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.440386 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19653811-dadf-4512-9f60-60cd3fcb9dc3-serving-cert\") pod \"19653811-dadf-4512-9f60-60cd3fcb9dc3\" (UID: \"19653811-dadf-4512-9f60-60cd3fcb9dc3\") "
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.440493 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-config\") pod \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") "
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.440539 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef3df9a-441b-45c6-ae42-22bf2ced1596-serving-cert\") pod \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\" (UID: \"2ef3df9a-441b-45c6-ae42-22bf2ced1596\") "
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.439244 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-client-ca" (OuterVolumeSpecName: "client-ca") pod "19653811-dadf-4512-9f60-60cd3fcb9dc3" (UID: "19653811-dadf-4512-9f60-60cd3fcb9dc3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.440948 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-config" (OuterVolumeSpecName: "config") pod "19653811-dadf-4512-9f60-60cd3fcb9dc3" (UID: "19653811-dadf-4512-9f60-60cd3fcb9dc3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.440968 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2ef3df9a-441b-45c6-ae42-22bf2ced1596" (UID: "2ef3df9a-441b-45c6-ae42-22bf2ced1596"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.441864 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-client-ca" (OuterVolumeSpecName: "client-ca") pod "2ef3df9a-441b-45c6-ae42-22bf2ced1596" (UID: "2ef3df9a-441b-45c6-ae42-22bf2ced1596"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.441932 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-config" (OuterVolumeSpecName: "config") pod "2ef3df9a-441b-45c6-ae42-22bf2ced1596" (UID: "2ef3df9a-441b-45c6-ae42-22bf2ced1596"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.446667 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ef3df9a-441b-45c6-ae42-22bf2ced1596-kube-api-access-4nbnv" (OuterVolumeSpecName: "kube-api-access-4nbnv") pod "2ef3df9a-441b-45c6-ae42-22bf2ced1596" (UID: "2ef3df9a-441b-45c6-ae42-22bf2ced1596"). InnerVolumeSpecName "kube-api-access-4nbnv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.450618 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19653811-dadf-4512-9f60-60cd3fcb9dc3-kube-api-access-ddtrd" (OuterVolumeSpecName: "kube-api-access-ddtrd") pod "19653811-dadf-4512-9f60-60cd3fcb9dc3" (UID: "19653811-dadf-4512-9f60-60cd3fcb9dc3"). InnerVolumeSpecName "kube-api-access-ddtrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.450633 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19653811-dadf-4512-9f60-60cd3fcb9dc3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "19653811-dadf-4512-9f60-60cd3fcb9dc3" (UID: "19653811-dadf-4512-9f60-60cd3fcb9dc3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.450707 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef3df9a-441b-45c6-ae42-22bf2ced1596-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2ef3df9a-441b-45c6-ae42-22bf2ced1596" (UID: "2ef3df9a-441b-45c6-ae42-22bf2ced1596"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.542090 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.542742 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef3df9a-441b-45c6-ae42-22bf2ced1596-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.542818 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.542877 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nbnv\" (UniqueName: \"kubernetes.io/projected/2ef3df9a-441b-45c6-ae42-22bf2ced1596-kube-api-access-4nbnv\") on node \"crc\" DevicePath \"\""
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.542944 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.543009 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19653811-dadf-4512-9f60-60cd3fcb9dc3-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.543108 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ef3df9a-441b-45c6-ae42-22bf2ced1596-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.543174 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddtrd\" (UniqueName: \"kubernetes.io/projected/19653811-dadf-4512-9f60-60cd3fcb9dc3-kube-api-access-ddtrd\") on node \"crc\" DevicePath \"\""
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.543233 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19653811-dadf-4512-9f60-60cd3fcb9dc3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.884815 4758 generic.go:334] "Generic (PLEG): container finished" podID="2ef3df9a-441b-45c6-ae42-22bf2ced1596" containerID="0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368" exitCode=0
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.884906 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" event={"ID":"2ef3df9a-441b-45c6-ae42-22bf2ced1596","Type":"ContainerDied","Data":"0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368"}
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.884961 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v" event={"ID":"2ef3df9a-441b-45c6-ae42-22bf2ced1596","Type":"ContainerDied","Data":"cb8e484918da6d053e525eab6cca1fdc7e6d78333b6e1bd25eabb42c722a0bd4"}
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.884981 4758 scope.go:117] "RemoveContainer" containerID="0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368"
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.885291 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v"
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.901906 4758 generic.go:334] "Generic (PLEG): container finished" podID="19653811-dadf-4512-9f60-60cd3fcb9dc3" containerID="d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee" exitCode=0
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.901956 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" event={"ID":"19653811-dadf-4512-9f60-60cd3fcb9dc3","Type":"ContainerDied","Data":"d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee"}
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.901981 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp" event={"ID":"19653811-dadf-4512-9f60-60cd3fcb9dc3","Type":"ContainerDied","Data":"c4d0e2b649449ef67d140345bed4ab7b2439ed578b46e3a77eefc092f8031eec"}
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.902124 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp"
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.903113 4758 scope.go:117] "RemoveContainer" containerID="0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368"
Jan 30 08:35:49 crc kubenswrapper[4758]: E0130 08:35:49.903535 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368\": container with ID starting with 0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368 not found: ID does not exist" containerID="0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368"
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.903619 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368"} err="failed to get container status \"0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368\": rpc error: code = NotFound desc = could not find container \"0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368\": container with ID starting with 0c0023993a0651d3b38093fe5fffa828afafcbd8bfe8be52cd30f331ddcb8368 not found: ID does not exist"
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.903638 4758 scope.go:117] "RemoveContainer" containerID="d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee"
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.925015 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v"]
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.925153 4758 scope.go:117] "RemoveContainer" containerID="d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee"
Jan 30 08:35:49 crc kubenswrapper[4758]: E0130 08:35:49.925744 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee\": container with ID starting with d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee not found: ID does not exist" containerID="d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee"
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.925773 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee"} err="failed to get container status \"d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee\": rpc error: code = NotFound desc = could not find container \"d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee\": container with ID starting with d8c93085d88bd37ac0e1f08add02e0dfa98eec7c2ff7f170090bfc7447f8d7ee not found: ID does not exist"
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.929076 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6d8b7cb844-l2q5v"]
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.932414 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp"]
Jan 30 08:35:49 crc kubenswrapper[4758]: I0130 08:35:49.935810 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5947dff4b8-nrbxp"]
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.044105 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86489848d5-jxkdj"]
Jan 30 08:35:50 crc kubenswrapper[4758]: E0130 08:35:50.044323 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19653811-dadf-4512-9f60-60cd3fcb9dc3" containerName="route-controller-manager"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.044334 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="19653811-dadf-4512-9f60-60cd3fcb9dc3" containerName="route-controller-manager"
Jan 30 08:35:50 crc kubenswrapper[4758]: E0130 08:35:50.044348 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef3df9a-441b-45c6-ae42-22bf2ced1596" containerName="controller-manager"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.044353 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef3df9a-441b-45c6-ae42-22bf2ced1596" containerName="controller-manager"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.044469 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="19653811-dadf-4512-9f60-60cd3fcb9dc3" containerName="route-controller-manager"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.044483 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef3df9a-441b-45c6-ae42-22bf2ced1596" containerName="controller-manager"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.044914 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.048148 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.050555 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-client-ca\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.050619 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zjt2\" (UniqueName: \"kubernetes.io/projected/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-kube-api-access-2zjt2\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.050657 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-config\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.050683 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-serving-cert\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.050714 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-proxy-ca-bundles\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.055806 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.064577 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.064604 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.064790 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.064924 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.065159 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.068884 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86489848d5-jxkdj"]
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.152488 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-client-ca\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.152578 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zjt2\" (UniqueName: \"kubernetes.io/projected/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-kube-api-access-2zjt2\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.152624 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-config\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.152665 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-serving-cert\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.152708 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-proxy-ca-bundles\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.153687 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-client-ca\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.155350 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-config\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.155762 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-proxy-ca-bundles\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.163244 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-serving-cert\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.169146 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zjt2\" (UniqueName: \"kubernetes.io/projected/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-kube-api-access-2zjt2\") pod \"controller-manager-86489848d5-jxkdj\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") " pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.368273 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.572855 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86489848d5-jxkdj"]
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.909114 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" event={"ID":"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5","Type":"ContainerStarted","Data":"5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6"}
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.909174 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" event={"ID":"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5","Type":"ContainerStarted","Data":"952f87f394f721284daffcec11f065bb01a614ab3053badb7c5d9e39e272998d"}
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.909190 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.924168 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:35:50 crc kubenswrapper[4758]: I0130 08:35:50.934383 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" podStartSLOduration=1.934359666 podStartE2EDuration="1.934359666s" podCreationTimestamp="2026-01-30 08:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:35:50.930023891 +0000 UTC m=+355.902335452" watchObservedRunningTime="2026-01-30 08:35:50.934359666 +0000 UTC m=+355.906671227"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.042607 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"]
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.043277 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.046019 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.046078 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.046223 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.046270 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.046443 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.047754 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.063386 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"]
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.066429 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d487c06-79ec-4b43-9bea-781f8fb2ec82-serving-cert\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.066622 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-client-ca\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.066724 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-config\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.066854 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ct27\" (UniqueName: \"kubernetes.io/projected/0d487c06-79ec-4b43-9bea-781f8fb2ec82-kube-api-access-4ct27\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.168200 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d487c06-79ec-4b43-9bea-781f8fb2ec82-serving-cert\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.169554 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-client-ca\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.170084 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-config\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.170197 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ct27\" (UniqueName: \"kubernetes.io/projected/0d487c06-79ec-4b43-9bea-781f8fb2ec82-kube-api-access-4ct27\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.171612 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-client-ca\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.171948 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-config\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.186156 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d487c06-79ec-4b43-9bea-781f8fb2ec82-serving-cert\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.186902 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ct27\" (UniqueName: \"kubernetes.io/projected/0d487c06-79ec-4b43-9bea-781f8fb2ec82-kube-api-access-4ct27\") pod \"route-controller-manager-85fb6447fc-bctwg\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") " pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.356952 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.554416 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"]
Jan 30 08:35:51 crc kubenswrapper[4758]: W0130 08:35:51.557704 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d487c06_79ec_4b43_9bea_781f8fb2ec82.slice/crio-e71e8270854b48bce6c12435853f17ac3a3eb3dd31d274b31bb292c04c914ea2 WatchSource:0}: Error finding container e71e8270854b48bce6c12435853f17ac3a3eb3dd31d274b31bb292c04c914ea2: Status 404 returned error can't find the container with id e71e8270854b48bce6c12435853f17ac3a3eb3dd31d274b31bb292c04c914ea2
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.780295 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19653811-dadf-4512-9f60-60cd3fcb9dc3" path="/var/lib/kubelet/pods/19653811-dadf-4512-9f60-60cd3fcb9dc3/volumes"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.781725 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ef3df9a-441b-45c6-ae42-22bf2ced1596" path="/var/lib/kubelet/pods/2ef3df9a-441b-45c6-ae42-22bf2ced1596/volumes"
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.917796 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" event={"ID":"0d487c06-79ec-4b43-9bea-781f8fb2ec82","Type":"ContainerStarted","Data":"cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b"}
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.917850 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" event={"ID":"0d487c06-79ec-4b43-9bea-781f8fb2ec82","Type":"ContainerStarted","Data":"e71e8270854b48bce6c12435853f17ac3a3eb3dd31d274b31bb292c04c914ea2"}
Jan 30 08:35:51 crc kubenswrapper[4758]: I0130 08:35:51.933629 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" podStartSLOduration=2.933611316 podStartE2EDuration="2.933611316s" podCreationTimestamp="2026-01-30 08:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:35:51.930097047 +0000 UTC m=+356.902408608" watchObservedRunningTime="2026-01-30 08:35:51.933611316 +0000 UTC m=+356.905922867"
Jan 30 08:35:52 crc kubenswrapper[4758]: I0130 08:35:52.387881 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:35:52 crc kubenswrapper[4758]: I0130 08:35:52.387948 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:35:52 crc kubenswrapper[4758]: I0130 08:35:52.923020 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"
Jan 30 08:35:52 crc kubenswrapper[4758]: I0130 08:35:52.926933 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.058680 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-f5m97"]
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.060295 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.076210 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-f5m97"]
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.241328 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bae986b6-2a59-4358-a530-067256d2e6fe-registry-certificates\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.241690 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bae986b6-2a59-4358-a530-067256d2e6fe-registry-tls\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.241829 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bae986b6-2a59-4358-a530-067256d2e6fe-bound-sa-token\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.241855 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bae986b6-2a59-4358-a530-067256d2e6fe-trusted-ca\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.241927 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.242391 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bae986b6-2a59-4358-a530-067256d2e6fe-installation-pull-secrets\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.242421 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6pqm\" (UniqueName: \"kubernetes.io/projected/bae986b6-2a59-4358-a530-067256d2e6fe-kube-api-access-b6pqm\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.242440 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bae986b6-2a59-4358-a530-067256d2e6fe-ca-trust-extracted\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.263587 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.343584 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bae986b6-2a59-4358-a530-067256d2e6fe-bound-sa-token\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.343635 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bae986b6-2a59-4358-a530-067256d2e6fe-trusted-ca\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.343666 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bae986b6-2a59-4358-a530-067256d2e6fe-installation-pull-secrets\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.343695 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6pqm\" (UniqueName: \"kubernetes.io/projected/bae986b6-2a59-4358-a530-067256d2e6fe-kube-api-access-b6pqm\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.343727 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bae986b6-2a59-4358-a530-067256d2e6fe-ca-trust-extracted\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.343782 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bae986b6-2a59-4358-a530-067256d2e6fe-registry-certificates\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.343802 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bae986b6-2a59-4358-a530-067256d2e6fe-registry-tls\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.345074 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bae986b6-2a59-4358-a530-067256d2e6fe-ca-trust-extracted\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.345132 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bae986b6-2a59-4358-a530-067256d2e6fe-trusted-ca\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.346107 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bae986b6-2a59-4358-a530-067256d2e6fe-registry-certificates\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.349553 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bae986b6-2a59-4358-a530-067256d2e6fe-registry-tls\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.352292 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bae986b6-2a59-4358-a530-067256d2e6fe-installation-pull-secrets\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.369857 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bae986b6-2a59-4358-a530-067256d2e6fe-bound-sa-token\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.375083 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6pqm\" (UniqueName: \"kubernetes.io/projected/bae986b6-2a59-4358-a530-067256d2e6fe-kube-api-access-b6pqm\") pod \"image-registry-66df7c8f76-f5m97\" (UID: \"bae986b6-2a59-4358-a530-067256d2e6fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.376275 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.799557 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-f5m97"]
Jan 30 08:36:04 crc kubenswrapper[4758]: W0130 08:36:04.803166 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbae986b6_2a59_4358_a530_067256d2e6fe.slice/crio-c8dbc8dacaf05c6351e58feb4d3f909c4c095110a7527514e4fa35b5db7cefc0 WatchSource:0}: Error finding container c8dbc8dacaf05c6351e58feb4d3f909c4c095110a7527514e4fa35b5db7cefc0: Status 404 returned error can't find the container with id c8dbc8dacaf05c6351e58feb4d3f909c4c095110a7527514e4fa35b5db7cefc0
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.989117 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" event={"ID":"bae986b6-2a59-4358-a530-067256d2e6fe","Type":"ContainerStarted","Data":"e90f5c4fce400ee2fd1a610fcb261ff538276f6e4db2f7776c102a815fa2829d"}
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.989170 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" event={"ID":"bae986b6-2a59-4358-a530-067256d2e6fe","Type":"ContainerStarted","Data":"c8dbc8dacaf05c6351e58feb4d3f909c4c095110a7527514e4fa35b5db7cefc0"}
Jan 30 08:36:04 crc kubenswrapper[4758]: I0130 08:36:04.989284 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-f5m97"
Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.037599 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" podStartSLOduration=18.03757812 podStartE2EDuration="18.03757812s" podCreationTimestamp="2026-01-30 08:36:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:36:05.009637146 +0000 UTC m=+369.981948697" watchObservedRunningTime="2026-01-30 08:36:22.03757812 +0000 UTC m=+387.009889691"
Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.040445 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86489848d5-jxkdj"]
Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.040672 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" podUID="a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" containerName="controller-manager" containerID="cri-o://5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6" gracePeriod=30
Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.088480 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"]
Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.088774 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" podUID="0d487c06-79ec-4b43-9bea-781f8fb2ec82" containerName="route-controller-manager" containerID="cri-o://cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b" gracePeriod=30
Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.401348 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.401958 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.792534 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"
Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.912757 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-config\") pod \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") "
Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.913126 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-client-ca\") pod \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") "
Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.913175 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ct27\" (UniqueName: \"kubernetes.io/projected/0d487c06-79ec-4b43-9bea-781f8fb2ec82-kube-api-access-4ct27\") pod \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") "
Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.913205 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d487c06-79ec-4b43-9bea-781f8fb2ec82-serving-cert\") pod \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\" (UID: \"0d487c06-79ec-4b43-9bea-781f8fb2ec82\") "
Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.913751 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-client-ca" (OuterVolumeSpecName: "client-ca") pod "0d487c06-79ec-4b43-9bea-781f8fb2ec82" (UID: "0d487c06-79ec-4b43-9bea-781f8fb2ec82"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.914699 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-config" (OuterVolumeSpecName: "config") pod "0d487c06-79ec-4b43-9bea-781f8fb2ec82" (UID: "0d487c06-79ec-4b43-9bea-781f8fb2ec82"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.923371 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d487c06-79ec-4b43-9bea-781f8fb2ec82-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0d487c06-79ec-4b43-9bea-781f8fb2ec82" (UID: "0d487c06-79ec-4b43-9bea-781f8fb2ec82"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.936412 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d487c06-79ec-4b43-9bea-781f8fb2ec82-kube-api-access-4ct27" (OuterVolumeSpecName: "kube-api-access-4ct27") pod "0d487c06-79ec-4b43-9bea-781f8fb2ec82" (UID: "0d487c06-79ec-4b43-9bea-781f8fb2ec82"). InnerVolumeSpecName "kube-api-access-4ct27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:36:22 crc kubenswrapper[4758]: I0130 08:36:22.987923 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.014843 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.014873 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d487c06-79ec-4b43-9bea-781f8fb2ec82-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.014912 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ct27\" (UniqueName: \"kubernetes.io/projected/0d487c06-79ec-4b43-9bea-781f8fb2ec82-kube-api-access-4ct27\") on node \"crc\" DevicePath \"\""
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.014926 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d487c06-79ec-4b43-9bea-781f8fb2ec82-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.086577 4758 generic.go:334] "Generic (PLEG): container finished" podID="0d487c06-79ec-4b43-9bea-781f8fb2ec82" containerID="cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b" exitCode=0
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.086629 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" event={"ID":"0d487c06-79ec-4b43-9bea-781f8fb2ec82","Type":"ContainerDied","Data":"cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b"}
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.086654 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg" event={"ID":"0d487c06-79ec-4b43-9bea-781f8fb2ec82","Type":"ContainerDied","Data":"e71e8270854b48bce6c12435853f17ac3a3eb3dd31d274b31bb292c04c914ea2"}
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.086674 4758 scope.go:117] "RemoveContainer" containerID="cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b"
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.086758 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.094924 4758 generic.go:334] "Generic (PLEG): container finished" podID="a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" containerID="5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6" exitCode=0
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.094965 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" event={"ID":"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5","Type":"ContainerDied","Data":"5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6"}
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.094995 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj" event={"ID":"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5","Type":"ContainerDied","Data":"952f87f394f721284daffcec11f065bb01a614ab3053badb7c5d9e39e272998d"}
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.095016 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86489848d5-jxkdj"
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.112755 4758 scope.go:117] "RemoveContainer" containerID="cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b"
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.116305 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-proxy-ca-bundles\") pod \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") "
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.116387 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-serving-cert\") pod \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") "
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.116425 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-client-ca\") pod \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") "
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.116572 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-config\") pod \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") "
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.116620 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zjt2\" (UniqueName: \"kubernetes.io/projected/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-kube-api-access-2zjt2\") pod \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\" (UID: \"a06f4bda-c2e7-4698-afd6-8bb2cc7914d5\") "
Jan 30 08:36:23 crc kubenswrapper[4758]: E0130 08:36:23.117418 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b\": container with ID starting with cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b not found: ID does not exist" containerID="cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b"
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.117477 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b"} err="failed to get container status \"cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b\": rpc error: code = NotFound desc = could not find container \"cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b\": container with ID starting with cdf649206e939c7c1338c90390afcf44c7929673f1da03650d99749efd8a074b not found: ID does not exist"
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.117507 4758 scope.go:117] "RemoveContainer" containerID="5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6"
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.117959 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-client-ca" (OuterVolumeSpecName: "client-ca") pod "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" (UID: "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.118230 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" (UID: "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.118794 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-config" (OuterVolumeSpecName: "config") pod "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" (UID: "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.121142 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"]
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.121354 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" (UID: "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.123446 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-kube-api-access-2zjt2" (OuterVolumeSpecName: "kube-api-access-2zjt2") pod "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" (UID: "a06f4bda-c2e7-4698-afd6-8bb2cc7914d5"). InnerVolumeSpecName "kube-api-access-2zjt2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.132398 4758 scope.go:117] "RemoveContainer" containerID="5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6"
Jan 30 08:36:23 crc kubenswrapper[4758]: E0130 08:36:23.132895 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6\": container with ID starting with 5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6 not found: ID does not exist" containerID="5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6"
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.132926 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6"} err="failed to get container status \"5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6\": rpc error: code = NotFound desc = could not find container \"5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6\": container with ID starting with 5665f4db19663b7ce7281d87eeccc760bd3404cf334093016cbc4ab51369dcc6 not found: ID does not exist"
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.135596 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85fb6447fc-bctwg"]
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.218331 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.218376 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zjt2\" (UniqueName: \"kubernetes.io/projected/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-kube-api-access-2zjt2\") on node \"crc\" DevicePath \"\""
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.218389 4758 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.218400 4758 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.218414 4758 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.423358 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86489848d5-jxkdj"]
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.428497 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86489848d5-jxkdj"]
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.775333 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d487c06-79ec-4b43-9bea-781f8fb2ec82" path="/var/lib/kubelet/pods/0d487c06-79ec-4b43-9bea-781f8fb2ec82/volumes"
Jan 30 08:36:23 crc kubenswrapper[4758]: I0130 08:36:23.776139 4758 kubelet_volumes.go:163] "Cleaned up
orphaned pod volumes dir" podUID="a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" path="/var/lib/kubelet/pods/a06f4bda-c2e7-4698-afd6-8bb2cc7914d5/volumes" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.067443 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-868c8b59f8-722w2"] Jan 30 08:36:24 crc kubenswrapper[4758]: E0130 08:36:24.067798 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d487c06-79ec-4b43-9bea-781f8fb2ec82" containerName="route-controller-manager" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.067819 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d487c06-79ec-4b43-9bea-781f8fb2ec82" containerName="route-controller-manager" Jan 30 08:36:24 crc kubenswrapper[4758]: E0130 08:36:24.067834 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" containerName="controller-manager" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.067840 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" containerName="controller-manager" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.067949 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a06f4bda-c2e7-4698-afd6-8bb2cc7914d5" containerName="controller-manager" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.067966 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d487c06-79ec-4b43-9bea-781f8fb2ec82" containerName="route-controller-manager" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.068387 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.071610 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng"] Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.072372 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.072421 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.072492 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.072652 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.072809 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.072914 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.073094 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.076128 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.076237 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.076467 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.076476 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.076592 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.076705 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.080109 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.089880 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng"] Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.090328 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-868c8b59f8-722w2"] Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.130474 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-serving-cert\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.130806 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e510388-48e8-4703-aa3e-ba6334438995-serving-cert\") pod \"controller-manager-868c8b59f8-722w2\" (UID: 
\"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.130905 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-client-ca\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.130962 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e510388-48e8-4703-aa3e-ba6334438995-proxy-ca-bundles\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.131090 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h6hc\" (UniqueName: \"kubernetes.io/projected/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-kube-api-access-4h6hc\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.131142 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-config\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.131168 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e510388-48e8-4703-aa3e-ba6334438995-client-ca\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.131193 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lblml\" (UniqueName: \"kubernetes.io/projected/8e510388-48e8-4703-aa3e-ba6334438995-kube-api-access-lblml\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.131268 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e510388-48e8-4703-aa3e-ba6334438995-config\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.232605 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-serving-cert\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: 
\"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.232648 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e510388-48e8-4703-aa3e-ba6334438995-serving-cert\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.232679 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-client-ca\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.232699 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e510388-48e8-4703-aa3e-ba6334438995-proxy-ca-bundles\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.232734 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h6hc\" (UniqueName: \"kubernetes.io/projected/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-kube-api-access-4h6hc\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.232754 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-config\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.232771 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e510388-48e8-4703-aa3e-ba6334438995-client-ca\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.232785 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lblml\" (UniqueName: \"kubernetes.io/projected/8e510388-48e8-4703-aa3e-ba6334438995-kube-api-access-lblml\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.232812 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e510388-48e8-4703-aa3e-ba6334438995-config\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc 
kubenswrapper[4758]: I0130 08:36:24.234482 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e510388-48e8-4703-aa3e-ba6334438995-config\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.234963 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e510388-48e8-4703-aa3e-ba6334438995-client-ca\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.235069 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e510388-48e8-4703-aa3e-ba6334438995-proxy-ca-bundles\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.235703 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-config\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.235773 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-client-ca\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.241500 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e510388-48e8-4703-aa3e-ba6334438995-serving-cert\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.254630 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lblml\" (UniqueName: \"kubernetes.io/projected/8e510388-48e8-4703-aa3e-ba6334438995-kube-api-access-lblml\") pod \"controller-manager-868c8b59f8-722w2\" (UID: \"8e510388-48e8-4703-aa3e-ba6334438995\") " pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.255968 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-serving-cert\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.257733 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h6hc\" (UniqueName: 
\"kubernetes.io/projected/1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32-kube-api-access-4h6hc\") pod \"route-controller-manager-6dbf8b55b6-krvng\" (UID: \"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32\") " pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.381116 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-f5m97" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.386213 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.399560 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.470658 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fd88w"] Jan 30 08:36:24 crc kubenswrapper[4758]: I0130 08:36:24.766150 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-868c8b59f8-722w2"] Jan 30 08:36:24 crc kubenswrapper[4758]: W0130 08:36:24.784263 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e510388_48e8_4703_aa3e_ba6334438995.slice/crio-f34ac511801f27f5aeb5425e6cc5d2f38dadf551fec16b5230fa738e9266c3b6 WatchSource:0}: Error finding container f34ac511801f27f5aeb5425e6cc5d2f38dadf551fec16b5230fa738e9266c3b6: Status 404 returned error can't find the container with id f34ac511801f27f5aeb5425e6cc5d2f38dadf551fec16b5230fa738e9266c3b6 Jan 30 08:36:25 crc kubenswrapper[4758]: I0130 08:36:25.101136 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng"] Jan 30 08:36:25 crc kubenswrapper[4758]: I0130 08:36:25.110177 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" event={"ID":"8e510388-48e8-4703-aa3e-ba6334438995","Type":"ContainerStarted","Data":"d3976be3ab28fb14d221ba7e20ade69478b6b39f2dee7f9a1a1f31487e53c343"} Jan 30 08:36:25 crc kubenswrapper[4758]: I0130 08:36:25.110214 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" event={"ID":"8e510388-48e8-4703-aa3e-ba6334438995","Type":"ContainerStarted","Data":"f34ac511801f27f5aeb5425e6cc5d2f38dadf551fec16b5230fa738e9266c3b6"} Jan 30 08:36:25 crc kubenswrapper[4758]: I0130 08:36:25.110750 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:25 crc kubenswrapper[4758]: I0130 08:36:25.114212 4758 patch_prober.go:28] interesting pod/controller-manager-868c8b59f8-722w2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Jan 30 08:36:25 crc kubenswrapper[4758]: I0130 08:36:25.114274 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" podUID="8e510388-48e8-4703-aa3e-ba6334438995" containerName="controller-manager" 
probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Jan 30 08:36:25 crc kubenswrapper[4758]: I0130 08:36:25.789702 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" podStartSLOduration=3.789681592 podStartE2EDuration="3.789681592s" podCreationTimestamp="2026-01-30 08:36:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:36:25.139567509 +0000 UTC m=+390.111879070" watchObservedRunningTime="2026-01-30 08:36:25.789681592 +0000 UTC m=+390.761993143" Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.124434 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" event={"ID":"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32","Type":"ContainerStarted","Data":"ee0f472ad92e87130e178e4fcee8edd4fd503dd0dc0ba10bba366b42697113d4"} Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.124472 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" event={"ID":"1b3d8ea7-e1da-4ad8-afb2-c9a93116ef32","Type":"ContainerStarted","Data":"b976e67b99cb919ab002e1d190e682ba660f41dafbfadba4cb86c5f7224608ba"} Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.124991 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.130609 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.131019 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-868c8b59f8-722w2" Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.147862 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dbf8b55b6-krvng" podStartSLOduration=4.147835108 podStartE2EDuration="4.147835108s" podCreationTimestamp="2026-01-30 08:36:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:36:26.137922209 +0000 UTC m=+391.110233780" watchObservedRunningTime="2026-01-30 08:36:26.147835108 +0000 UTC m=+391.120146669" Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.910555 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xpchq"] Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.911362 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xpchq" podUID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerName="registry-server" containerID="cri-o://0b2ec1128ae0f87aa56272475ed8df830dea7aa074761913f0222440a559a3e5" gracePeriod=30 Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.929381 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fvrs2"] Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.929961 4758 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-fvrs2" podUID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerName="registry-server" containerID="cri-o://b5c5b862b9f83415c1d77938c3409a7625341cb7402a46339bcd3712ede79206" gracePeriod=30 Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.939083 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7mbqg"] Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.939339 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerName="marketplace-operator" containerID="cri-o://9289bf34b4fd965a08ae479d84fd68bacbc2b99807df3ddbee8cd5dde015a76b" gracePeriod=30 Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.950240 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x78bc"] Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.950449 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x78bc" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" containerName="registry-server" containerID="cri-o://d35d25a07f88153e84e88d2a6f9ec5fe408fc2f157e85bba35a636862f0e3d16" gracePeriod=30 Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.962547 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lsfmf"] Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.963355 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.995168 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqdsb"] Jan 30 08:36:26 crc kubenswrapper[4758]: I0130 08:36:26.995571 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zqdsb" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerName="registry-server" containerID="cri-o://f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929" gracePeriod=30 Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.037511 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lsfmf"] Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.073392 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f34a2860-1860-4032-8f5d-9278338c1b19-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lsfmf\" (UID: \"f34a2860-1860-4032-8f5d-9278338c1b19\") " pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.073641 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh2k4\" (UniqueName: \"kubernetes.io/projected/f34a2860-1860-4032-8f5d-9278338c1b19-kube-api-access-nh2k4\") pod \"marketplace-operator-79b997595-lsfmf\" (UID: \"f34a2860-1860-4032-8f5d-9278338c1b19\") " pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.073785 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f34a2860-1860-4032-8f5d-9278338c1b19-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lsfmf\" (UID: \"f34a2860-1860-4032-8f5d-9278338c1b19\") " pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.163794 4758 generic.go:334] "Generic (PLEG): container finished" podID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerID="b5c5b862b9f83415c1d77938c3409a7625341cb7402a46339bcd3712ede79206" exitCode=0 Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.163869 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvrs2" event={"ID":"ee1dd099-7697-43e1-9777-b3735c8013e8","Type":"ContainerDied","Data":"b5c5b862b9f83415c1d77938c3409a7625341cb7402a46339bcd3712ede79206"} Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.175379 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f34a2860-1860-4032-8f5d-9278338c1b19-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lsfmf\" (UID: \"f34a2860-1860-4032-8f5d-9278338c1b19\") " pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.175437 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh2k4\" (UniqueName: \"kubernetes.io/projected/f34a2860-1860-4032-8f5d-9278338c1b19-kube-api-access-nh2k4\") pod \"marketplace-operator-79b997595-lsfmf\" (UID: \"f34a2860-1860-4032-8f5d-9278338c1b19\") " pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.175483 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f34a2860-1860-4032-8f5d-9278338c1b19-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lsfmf\" (UID: \"f34a2860-1860-4032-8f5d-9278338c1b19\") " pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.177383 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f34a2860-1860-4032-8f5d-9278338c1b19-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lsfmf\" (UID: \"f34a2860-1860-4032-8f5d-9278338c1b19\") " pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.183537 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f34a2860-1860-4032-8f5d-9278338c1b19-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lsfmf\" (UID: \"f34a2860-1860-4032-8f5d-9278338c1b19\") " pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.192193 4758 generic.go:334] "Generic (PLEG): container finished" podID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerID="9289bf34b4fd965a08ae479d84fd68bacbc2b99807df3ddbee8cd5dde015a76b" exitCode=0 Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.192275 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" 
event={"ID":"49a66643-dc8c-4c84-9345-7f98676dc1d3","Type":"ContainerDied","Data":"9289bf34b4fd965a08ae479d84fd68bacbc2b99807df3ddbee8cd5dde015a76b"} Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.192307 4758 scope.go:117] "RemoveContainer" containerID="00e032b52468c18324073e8f68c53a2950d0c1e9e92eb63a8cddb5aaf6d5f40e" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.199874 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh2k4\" (UniqueName: \"kubernetes.io/projected/f34a2860-1860-4032-8f5d-9278338c1b19-kube-api-access-nh2k4\") pod \"marketplace-operator-79b997595-lsfmf\" (UID: \"f34a2860-1860-4032-8f5d-9278338c1b19\") " pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.202618 4758 generic.go:334] "Generic (PLEG): container finished" podID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerID="0b2ec1128ae0f87aa56272475ed8df830dea7aa074761913f0222440a559a3e5" exitCode=0 Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.202698 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpchq" event={"ID":"ad53cc1a-5ef4-4e05-a996-b8e53194ef37","Type":"ContainerDied","Data":"0b2ec1128ae0f87aa56272475ed8df830dea7aa074761913f0222440a559a3e5"} Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.204554 4758 generic.go:334] "Generic (PLEG): container finished" podID="a8160a87-6f56-4d61-a17b-8049588a293b" containerID="d35d25a07f88153e84e88d2a6f9ec5fe408fc2f157e85bba35a636862f0e3d16" exitCode=0 Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.205525 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x78bc" event={"ID":"a8160a87-6f56-4d61-a17b-8049588a293b","Type":"ContainerDied","Data":"d35d25a07f88153e84e88d2a6f9ec5fe408fc2f157e85bba35a636862f0e3d16"} Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.292387 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.451030 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.586904 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-operator-metrics\") pod \"49a66643-dc8c-4c84-9345-7f98676dc1d3\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.586970 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-trusted-ca\") pod \"49a66643-dc8c-4c84-9345-7f98676dc1d3\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.587097 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97djv\" (UniqueName: \"kubernetes.io/projected/49a66643-dc8c-4c84-9345-7f98676dc1d3-kube-api-access-97djv\") pod \"49a66643-dc8c-4c84-9345-7f98676dc1d3\" (UID: \"49a66643-dc8c-4c84-9345-7f98676dc1d3\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.590061 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "49a66643-dc8c-4c84-9345-7f98676dc1d3" (UID: "49a66643-dc8c-4c84-9345-7f98676dc1d3"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.590468 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "49a66643-dc8c-4c84-9345-7f98676dc1d3" (UID: "49a66643-dc8c-4c84-9345-7f98676dc1d3"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.593107 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49a66643-dc8c-4c84-9345-7f98676dc1d3-kube-api-access-97djv" (OuterVolumeSpecName: "kube-api-access-97djv") pod "49a66643-dc8c-4c84-9345-7f98676dc1d3" (UID: "49a66643-dc8c-4c84-9345-7f98676dc1d3"). InnerVolumeSpecName "kube-api-access-97djv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.640570 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.689337 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-catalog-content\") pod \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.689564 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvflw\" (UniqueName: \"kubernetes.io/projected/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-kube-api-access-tvflw\") pod \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.689623 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-utilities\") pod \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\" (UID: \"ad53cc1a-5ef4-4e05-a996-b8e53194ef37\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.689863 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97djv\" (UniqueName: \"kubernetes.io/projected/49a66643-dc8c-4c84-9345-7f98676dc1d3-kube-api-access-97djv\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.689874 4758 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.689883 4758 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49a66643-dc8c-4c84-9345-7f98676dc1d3-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.690749 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-utilities" (OuterVolumeSpecName: "utilities") pod "ad53cc1a-5ef4-4e05-a996-b8e53194ef37" (UID: "ad53cc1a-5ef4-4e05-a996-b8e53194ef37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.692604 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-kube-api-access-tvflw" (OuterVolumeSpecName: "kube-api-access-tvflw") pod "ad53cc1a-5ef4-4e05-a996-b8e53194ef37" (UID: "ad53cc1a-5ef4-4e05-a996-b8e53194ef37"). InnerVolumeSpecName "kube-api-access-tvflw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.703760 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.718423 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.735968 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.760269 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad53cc1a-5ef4-4e05-a996-b8e53194ef37" (UID: "ad53cc1a-5ef4-4e05-a996-b8e53194ef37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.792762 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-catalog-content\") pod \"a8160a87-6f56-4d61-a17b-8049588a293b\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.793273 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdh94\" (UniqueName: \"kubernetes.io/projected/a8160a87-6f56-4d61-a17b-8049588a293b-kube-api-access-bdh94\") pod \"a8160a87-6f56-4d61-a17b-8049588a293b\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.794785 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-utilities\") pod \"ee1dd099-7697-43e1-9777-b3735c8013e8\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.794811 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-catalog-content\") pod \"9959a40e-acf9-4c57-967c-bbd102964dbb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.794842 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76w9h\" (UniqueName: \"kubernetes.io/projected/ee1dd099-7697-43e1-9777-b3735c8013e8-kube-api-access-76w9h\") pod \"ee1dd099-7697-43e1-9777-b3735c8013e8\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.794925 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz8l2\" (UniqueName: \"kubernetes.io/projected/9959a40e-acf9-4c57-967c-bbd102964dbb-kube-api-access-nz8l2\") pod \"9959a40e-acf9-4c57-967c-bbd102964dbb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.794954 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-utilities\") pod \"9959a40e-acf9-4c57-967c-bbd102964dbb\" (UID: \"9959a40e-acf9-4c57-967c-bbd102964dbb\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.794976 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-catalog-content\") pod \"ee1dd099-7697-43e1-9777-b3735c8013e8\" (UID: \"ee1dd099-7697-43e1-9777-b3735c8013e8\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.794993 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-utilities\") pod \"a8160a87-6f56-4d61-a17b-8049588a293b\" (UID: \"a8160a87-6f56-4d61-a17b-8049588a293b\") " Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.795278 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvflw\" (UniqueName: \"kubernetes.io/projected/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-kube-api-access-tvflw\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.795303 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.795316 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad53cc1a-5ef4-4e05-a996-b8e53194ef37-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.799462 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-utilities" (OuterVolumeSpecName: "utilities") pod "a8160a87-6f56-4d61-a17b-8049588a293b" (UID: "a8160a87-6f56-4d61-a17b-8049588a293b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.800562 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-utilities" (OuterVolumeSpecName: "utilities") pod "ee1dd099-7697-43e1-9777-b3735c8013e8" (UID: "ee1dd099-7697-43e1-9777-b3735c8013e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.801161 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-utilities" (OuterVolumeSpecName: "utilities") pod "9959a40e-acf9-4c57-967c-bbd102964dbb" (UID: "9959a40e-acf9-4c57-967c-bbd102964dbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.801741 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8160a87-6f56-4d61-a17b-8049588a293b-kube-api-access-bdh94" (OuterVolumeSpecName: "kube-api-access-bdh94") pod "a8160a87-6f56-4d61-a17b-8049588a293b" (UID: "a8160a87-6f56-4d61-a17b-8049588a293b"). InnerVolumeSpecName "kube-api-access-bdh94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.803479 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9959a40e-acf9-4c57-967c-bbd102964dbb-kube-api-access-nz8l2" (OuterVolumeSpecName: "kube-api-access-nz8l2") pod "9959a40e-acf9-4c57-967c-bbd102964dbb" (UID: "9959a40e-acf9-4c57-967c-bbd102964dbb"). InnerVolumeSpecName "kube-api-access-nz8l2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.804861 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee1dd099-7697-43e1-9777-b3735c8013e8-kube-api-access-76w9h" (OuterVolumeSpecName: "kube-api-access-76w9h") pod "ee1dd099-7697-43e1-9777-b3735c8013e8" (UID: "ee1dd099-7697-43e1-9777-b3735c8013e8"). InnerVolumeSpecName "kube-api-access-76w9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.824643 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8160a87-6f56-4d61-a17b-8049588a293b" (UID: "a8160a87-6f56-4d61-a17b-8049588a293b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.862229 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee1dd099-7697-43e1-9777-b3735c8013e8" (UID: "ee1dd099-7697-43e1-9777-b3735c8013e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.897973 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz8l2\" (UniqueName: \"kubernetes.io/projected/9959a40e-acf9-4c57-967c-bbd102964dbb-kube-api-access-nz8l2\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.898010 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.898026 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.898054 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.898067 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8160a87-6f56-4d61-a17b-8049588a293b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.898079 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdh94\" (UniqueName: \"kubernetes.io/projected/a8160a87-6f56-4d61-a17b-8049588a293b-kube-api-access-bdh94\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.898090 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee1dd099-7697-43e1-9777-b3735c8013e8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.898101 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76w9h\" (UniqueName: \"kubernetes.io/projected/ee1dd099-7697-43e1-9777-b3735c8013e8-kube-api-access-76w9h\") on node \"crc\" 
DevicePath \"\"" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.949833 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lsfmf"] Jan 30 08:36:27 crc kubenswrapper[4758]: W0130 08:36:27.960882 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf34a2860_1860_4032_8f5d_9278338c1b19.slice/crio-81100c90897287a3643ba9899c3a44cde7b2b8080091d934eb42cb93d73cf7ed WatchSource:0}: Error finding container 81100c90897287a3643ba9899c3a44cde7b2b8080091d934eb42cb93d73cf7ed: Status 404 returned error can't find the container with id 81100c90897287a3643ba9899c3a44cde7b2b8080091d934eb42cb93d73cf7ed Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.989975 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9959a40e-acf9-4c57-967c-bbd102964dbb" (UID: "9959a40e-acf9-4c57-967c-bbd102964dbb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:27 crc kubenswrapper[4758]: I0130 08:36:27.999416 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9959a40e-acf9-4c57-967c-bbd102964dbb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.210350 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" event={"ID":"f34a2860-1860-4032-8f5d-9278338c1b19","Type":"ContainerStarted","Data":"2a87ec21ebcb6880dfa8aa72af6286e3bfe6ad41b3c7b5d697ee49a2369f7c7a"} Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.210389 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" event={"ID":"f34a2860-1860-4032-8f5d-9278338c1b19","Type":"ContainerStarted","Data":"81100c90897287a3643ba9899c3a44cde7b2b8080091d934eb42cb93d73cf7ed"} Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.210844 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.212023 4758 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lsfmf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" start-of-body= Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.212074 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" podUID="f34a2860-1860-4032-8f5d-9278338c1b19" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.214307 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x78bc" event={"ID":"a8160a87-6f56-4d61-a17b-8049588a293b","Type":"ContainerDied","Data":"b94b087d70a109782aecba154dc0988599437def55360286dd53d57c7243892b"} Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.214359 4758 scope.go:117] "RemoveContainer" 
containerID="d35d25a07f88153e84e88d2a6f9ec5fe408fc2f157e85bba35a636862f0e3d16" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.214322 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x78bc" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.217529 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvrs2" event={"ID":"ee1dd099-7697-43e1-9777-b3735c8013e8","Type":"ContainerDied","Data":"0bb05ce1b23a66a7b1423906b5cec1cc9cdc27900f23c97851de922a49efcf1f"} Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.217591 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvrs2" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.219695 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" event={"ID":"49a66643-dc8c-4c84-9345-7f98676dc1d3","Type":"ContainerDied","Data":"4118893c53a7f8a706c32c2a7e6e6021db4c895afd2c8cfa46828333ae413f1b"} Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.219835 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7mbqg" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.225076 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpchq" event={"ID":"ad53cc1a-5ef4-4e05-a996-b8e53194ef37","Type":"ContainerDied","Data":"7b1a949affe965a07631839d4c9eae6592dd927e8832c885c5dddac995cd6027"} Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.225177 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xpchq" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.231811 4758 generic.go:334] "Generic (PLEG): container finished" podID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerID="f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929" exitCode=0 Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.231872 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqdsb" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.232303 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqdsb" event={"ID":"9959a40e-acf9-4c57-967c-bbd102964dbb","Type":"ContainerDied","Data":"f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929"} Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.233196 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqdsb" event={"ID":"9959a40e-acf9-4c57-967c-bbd102964dbb","Type":"ContainerDied","Data":"2d31c2681539f71b539b22493850f84cbd075530d6103b3a81f2ee6e83bc8595"} Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.236970 4758 scope.go:117] "RemoveContainer" containerID="b6863769607a53efb0adc05b3d382f626fe18a9026de5f17efe5402505ed63f4" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.241733 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" podStartSLOduration=2.241712604 podStartE2EDuration="2.241712604s" podCreationTimestamp="2026-01-30 08:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:36:28.237572605 +0000 UTC m=+393.209884166" watchObservedRunningTime="2026-01-30 08:36:28.241712604 +0000 UTC m=+393.214024155" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.251600 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7mbqg"] Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.258238 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7mbqg"] Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.273436 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xpchq"] Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.276095 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xpchq"] Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.277595 4758 scope.go:117] "RemoveContainer" containerID="15ee256dfea5816a52c886c11283e7357d33ef2c840a69e7c266baa52c35e543" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.292536 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fvrs2"] Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.307142 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fvrs2"] Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.307511 4758 scope.go:117] "RemoveContainer" containerID="b5c5b862b9f83415c1d77938c3409a7625341cb7402a46339bcd3712ede79206" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.315642 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x78bc"] Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.319541 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x78bc"] Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.323077 4758 scope.go:117] "RemoveContainer" containerID="c3f68acbef392b308c2ed3b35e3acc112673547e011fad5471bd97a0db59032a" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.334767 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-zqdsb"] Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.337867 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zqdsb"] Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.341154 4758 scope.go:117] "RemoveContainer" containerID="4fd34a8816ab9b8ac36f3ee7f4fccd351c108f7b629f6428e022613e9b4c1cfd" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.355064 4758 scope.go:117] "RemoveContainer" containerID="9289bf34b4fd965a08ae479d84fd68bacbc2b99807df3ddbee8cd5dde015a76b" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.373396 4758 scope.go:117] "RemoveContainer" containerID="0b2ec1128ae0f87aa56272475ed8df830dea7aa074761913f0222440a559a3e5" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.388988 4758 scope.go:117] "RemoveContainer" containerID="b5bdd4b46ac12fea35ffe9c6fead48f31f038e946902c61ae00ea7909af021e0" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.408251 4758 scope.go:117] "RemoveContainer" containerID="dab894bc290593f6bf4efa3b73a1fcf4864265047195960150260493f5158b78" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.429817 4758 scope.go:117] "RemoveContainer" containerID="f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.446119 4758 scope.go:117] "RemoveContainer" containerID="fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.472975 4758 scope.go:117] "RemoveContainer" containerID="08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.490557 4758 scope.go:117] "RemoveContainer" containerID="f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929" Jan 30 08:36:28 crc kubenswrapper[4758]: E0130 08:36:28.491067 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929\": container with ID starting with f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929 not found: ID does not exist" containerID="f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.491128 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929"} err="failed to get container status \"f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929\": rpc error: code = NotFound desc = could not find container \"f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929\": container with ID starting with f64eb07ab8a815a0730f0539e91ceafc2a566eecb86a55bfc103e4fcf7fc3929 not found: ID does not exist" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.491164 4758 scope.go:117] "RemoveContainer" containerID="fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705" Jan 30 08:36:28 crc kubenswrapper[4758]: E0130 08:36:28.491718 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705\": container with ID starting with fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705 not found: ID does not exist" containerID="fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705" Jan 30 08:36:28 
crc kubenswrapper[4758]: I0130 08:36:28.491757 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705"} err="failed to get container status \"fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705\": rpc error: code = NotFound desc = could not find container \"fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705\": container with ID starting with fa3b96548dac273e5f1a8a53056623718a847b4ac327280b1fb141949e11f705 not found: ID does not exist" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.491781 4758 scope.go:117] "RemoveContainer" containerID="08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601" Jan 30 08:36:28 crc kubenswrapper[4758]: E0130 08:36:28.492097 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601\": container with ID starting with 08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601 not found: ID does not exist" containerID="08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601" Jan 30 08:36:28 crc kubenswrapper[4758]: I0130 08:36:28.492129 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601"} err="failed to get container status \"08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601\": rpc error: code = NotFound desc = could not find container \"08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601\": container with ID starting with 08de3b285981bd44c84d57b6db73406e4e967891abccbe9bd10eecbdf269a601 not found: ID does not exist" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.245408 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lsfmf" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.736952 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7tq2j"] Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.737518 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" containerName="extract-content" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.737625 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" containerName="extract-content" Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.737715 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerName="extract-content" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.737794 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerName="extract-content" Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.737873 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerName="registry-server" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.737950 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerName="registry-server" Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.738061 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" 
containerName="registry-server" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.738153 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerName="registry-server" Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.738244 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerName="extract-utilities" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.738321 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerName="extract-utilities" Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.738400 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerName="marketplace-operator" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.738475 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerName="marketplace-operator" Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.738556 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" containerName="registry-server" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.738640 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" containerName="registry-server" Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.738730 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerName="marketplace-operator" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.738814 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerName="marketplace-operator" Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.738895 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" containerName="extract-utilities" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.738965 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" containerName="extract-utilities" Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.739034 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerName="extract-content" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.739121 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerName="extract-content" Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.739198 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerName="extract-content" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.739273 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerName="extract-content" Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.739345 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerName="extract-utilities" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.739414 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerName="extract-utilities" Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.739496 4758 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerName="extract-utilities" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.739580 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerName="extract-utilities" Jan 30 08:36:29 crc kubenswrapper[4758]: E0130 08:36:29.739664 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerName="registry-server" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.739746 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerName="registry-server" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.739929 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" containerName="registry-server" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.740019 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" containerName="registry-server" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.740118 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerName="marketplace-operator" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.740292 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" containerName="marketplace-operator" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.740438 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" containerName="registry-server" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.740525 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee1dd099-7697-43e1-9777-b3735c8013e8" containerName="registry-server" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.741537 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.744683 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.778557 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49a66643-dc8c-4c84-9345-7f98676dc1d3" path="/var/lib/kubelet/pods/49a66643-dc8c-4c84-9345-7f98676dc1d3/volumes" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.779171 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9959a40e-acf9-4c57-967c-bbd102964dbb" path="/var/lib/kubelet/pods/9959a40e-acf9-4c57-967c-bbd102964dbb/volumes" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.779872 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8160a87-6f56-4d61-a17b-8049588a293b" path="/var/lib/kubelet/pods/a8160a87-6f56-4d61-a17b-8049588a293b/volumes" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.781095 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad53cc1a-5ef4-4e05-a996-b8e53194ef37" path="/var/lib/kubelet/pods/ad53cc1a-5ef4-4e05-a996-b8e53194ef37/volumes" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.781802 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee1dd099-7697-43e1-9777-b3735c8013e8" path="/var/lib/kubelet/pods/ee1dd099-7697-43e1-9777-b3735c8013e8/volumes" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.808914 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7tq2j"] Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.822007 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p77qv\" (UniqueName: \"kubernetes.io/projected/73f8c779-64cc-4d7d-8762-4f8cf1611071-kube-api-access-p77qv\") pod \"certified-operators-7tq2j\" (UID: \"73f8c779-64cc-4d7d-8762-4f8cf1611071\") " pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.822159 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73f8c779-64cc-4d7d-8762-4f8cf1611071-catalog-content\") pod \"certified-operators-7tq2j\" (UID: \"73f8c779-64cc-4d7d-8762-4f8cf1611071\") " pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.822187 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73f8c779-64cc-4d7d-8762-4f8cf1611071-utilities\") pod \"certified-operators-7tq2j\" (UID: \"73f8c779-64cc-4d7d-8762-4f8cf1611071\") " pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.922957 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p77qv\" (UniqueName: \"kubernetes.io/projected/73f8c779-64cc-4d7d-8762-4f8cf1611071-kube-api-access-p77qv\") pod \"certified-operators-7tq2j\" (UID: \"73f8c779-64cc-4d7d-8762-4f8cf1611071\") " pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.923230 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/73f8c779-64cc-4d7d-8762-4f8cf1611071-catalog-content\") pod \"certified-operators-7tq2j\" (UID: \"73f8c779-64cc-4d7d-8762-4f8cf1611071\") " pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.923348 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73f8c779-64cc-4d7d-8762-4f8cf1611071-utilities\") pod \"certified-operators-7tq2j\" (UID: \"73f8c779-64cc-4d7d-8762-4f8cf1611071\") " pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.923724 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73f8c779-64cc-4d7d-8762-4f8cf1611071-catalog-content\") pod \"certified-operators-7tq2j\" (UID: \"73f8c779-64cc-4d7d-8762-4f8cf1611071\") " pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.923784 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73f8c779-64cc-4d7d-8762-4f8cf1611071-utilities\") pod \"certified-operators-7tq2j\" (UID: \"73f8c779-64cc-4d7d-8762-4f8cf1611071\") " pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:29 crc kubenswrapper[4758]: I0130 08:36:29.939450 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p77qv\" (UniqueName: \"kubernetes.io/projected/73f8c779-64cc-4d7d-8762-4f8cf1611071-kube-api-access-p77qv\") pod \"certified-operators-7tq2j\" (UID: \"73f8c779-64cc-4d7d-8762-4f8cf1611071\") " pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.054901 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.502849 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7tq2j"] Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.725173 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fmg9d"] Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.729925 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fmg9d" Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.735157 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.741832 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fmg9d"] Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.847800 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f82fd\" (UniqueName: \"kubernetes.io/projected/8404e227-68e2-4686-a04d-00048ba303ec-kube-api-access-f82fd\") pod \"redhat-operators-fmg9d\" (UID: \"8404e227-68e2-4686-a04d-00048ba303ec\") " pod="openshift-marketplace/redhat-operators-fmg9d" Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.847899 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8404e227-68e2-4686-a04d-00048ba303ec-catalog-content\") pod \"redhat-operators-fmg9d\" (UID: \"8404e227-68e2-4686-a04d-00048ba303ec\") " pod="openshift-marketplace/redhat-operators-fmg9d" Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.847932 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8404e227-68e2-4686-a04d-00048ba303ec-utilities\") pod \"redhat-operators-fmg9d\" (UID: \"8404e227-68e2-4686-a04d-00048ba303ec\") " pod="openshift-marketplace/redhat-operators-fmg9d" Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.949332 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8404e227-68e2-4686-a04d-00048ba303ec-catalog-content\") pod \"redhat-operators-fmg9d\" (UID: \"8404e227-68e2-4686-a04d-00048ba303ec\") " pod="openshift-marketplace/redhat-operators-fmg9d" Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.949386 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8404e227-68e2-4686-a04d-00048ba303ec-utilities\") pod \"redhat-operators-fmg9d\" (UID: \"8404e227-68e2-4686-a04d-00048ba303ec\") " pod="openshift-marketplace/redhat-operators-fmg9d" Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.949424 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f82fd\" (UniqueName: \"kubernetes.io/projected/8404e227-68e2-4686-a04d-00048ba303ec-kube-api-access-f82fd\") pod \"redhat-operators-fmg9d\" (UID: \"8404e227-68e2-4686-a04d-00048ba303ec\") " pod="openshift-marketplace/redhat-operators-fmg9d" Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.950092 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8404e227-68e2-4686-a04d-00048ba303ec-catalog-content\") pod \"redhat-operators-fmg9d\" (UID: \"8404e227-68e2-4686-a04d-00048ba303ec\") " pod="openshift-marketplace/redhat-operators-fmg9d" Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.950294 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8404e227-68e2-4686-a04d-00048ba303ec-utilities\") pod \"redhat-operators-fmg9d\" (UID: \"8404e227-68e2-4686-a04d-00048ba303ec\") " 
pod="openshift-marketplace/redhat-operators-fmg9d" Jan 30 08:36:30 crc kubenswrapper[4758]: I0130 08:36:30.967149 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f82fd\" (UniqueName: \"kubernetes.io/projected/8404e227-68e2-4686-a04d-00048ba303ec-kube-api-access-f82fd\") pod \"redhat-operators-fmg9d\" (UID: \"8404e227-68e2-4686-a04d-00048ba303ec\") " pod="openshift-marketplace/redhat-operators-fmg9d" Jan 30 08:36:31 crc kubenswrapper[4758]: I0130 08:36:31.051144 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fmg9d" Jan 30 08:36:31 crc kubenswrapper[4758]: I0130 08:36:31.255912 4758 generic.go:334] "Generic (PLEG): container finished" podID="73f8c779-64cc-4d7d-8762-4f8cf1611071" containerID="f8ffe2f184595707503d9582ebebbbf88874d1a789820fcaf5af3b94d7c0101b" exitCode=0 Jan 30 08:36:31 crc kubenswrapper[4758]: I0130 08:36:31.255952 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7tq2j" event={"ID":"73f8c779-64cc-4d7d-8762-4f8cf1611071","Type":"ContainerDied","Data":"f8ffe2f184595707503d9582ebebbbf88874d1a789820fcaf5af3b94d7c0101b"} Jan 30 08:36:31 crc kubenswrapper[4758]: I0130 08:36:31.255984 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7tq2j" event={"ID":"73f8c779-64cc-4d7d-8762-4f8cf1611071","Type":"ContainerStarted","Data":"de321136531ad85f4df6f150b06b26efab30414dd9bf3b1dc457601760e55716"} Jan 30 08:36:31 crc kubenswrapper[4758]: I0130 08:36:31.464151 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fmg9d"] Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.133650 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h6v7r"] Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.135484 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h6v7r" Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.142653 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.149282 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h6v7r"] Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.262957 4758 generic.go:334] "Generic (PLEG): container finished" podID="73f8c779-64cc-4d7d-8762-4f8cf1611071" containerID="369097a25de3facae9c45c855cc0128200e681604ff8b7e66670def5606597b7" exitCode=0 Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.263125 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7tq2j" event={"ID":"73f8c779-64cc-4d7d-8762-4f8cf1611071","Type":"ContainerDied","Data":"369097a25de3facae9c45c855cc0128200e681604ff8b7e66670def5606597b7"} Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.264807 4758 generic.go:334] "Generic (PLEG): container finished" podID="8404e227-68e2-4686-a04d-00048ba303ec" containerID="c348f10a599c9eddf2e5183bbd4b31ec6c663a2d59df5538736c103ada74044f" exitCode=0 Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.264867 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmg9d" event={"ID":"8404e227-68e2-4686-a04d-00048ba303ec","Type":"ContainerDied","Data":"c348f10a599c9eddf2e5183bbd4b31ec6c663a2d59df5538736c103ada74044f"} Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.264933 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmg9d" event={"ID":"8404e227-68e2-4686-a04d-00048ba303ec","Type":"ContainerStarted","Data":"d4055b4f3028ac245c2cf8d96e568bc6013013eda34858f6f0bf4b80d703242b"} Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.284702 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76812ff6-8f58-4c5c-8606-3cc8f949146e-catalog-content\") pod \"community-operators-h6v7r\" (UID: \"76812ff6-8f58-4c5c-8606-3cc8f949146e\") " pod="openshift-marketplace/community-operators-h6v7r" Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.284798 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76812ff6-8f58-4c5c-8606-3cc8f949146e-utilities\") pod \"community-operators-h6v7r\" (UID: \"76812ff6-8f58-4c5c-8606-3cc8f949146e\") " pod="openshift-marketplace/community-operators-h6v7r" Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.284839 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jqkw\" (UniqueName: \"kubernetes.io/projected/76812ff6-8f58-4c5c-8606-3cc8f949146e-kube-api-access-4jqkw\") pod \"community-operators-h6v7r\" (UID: \"76812ff6-8f58-4c5c-8606-3cc8f949146e\") " pod="openshift-marketplace/community-operators-h6v7r" Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.386235 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76812ff6-8f58-4c5c-8606-3cc8f949146e-utilities\") pod \"community-operators-h6v7r\" (UID: \"76812ff6-8f58-4c5c-8606-3cc8f949146e\") " pod="openshift-marketplace/community-operators-h6v7r" Jan 
30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.386307 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jqkw\" (UniqueName: \"kubernetes.io/projected/76812ff6-8f58-4c5c-8606-3cc8f949146e-kube-api-access-4jqkw\") pod \"community-operators-h6v7r\" (UID: \"76812ff6-8f58-4c5c-8606-3cc8f949146e\") " pod="openshift-marketplace/community-operators-h6v7r" Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.386382 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76812ff6-8f58-4c5c-8606-3cc8f949146e-catalog-content\") pod \"community-operators-h6v7r\" (UID: \"76812ff6-8f58-4c5c-8606-3cc8f949146e\") " pod="openshift-marketplace/community-operators-h6v7r" Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.387744 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76812ff6-8f58-4c5c-8606-3cc8f949146e-catalog-content\") pod \"community-operators-h6v7r\" (UID: \"76812ff6-8f58-4c5c-8606-3cc8f949146e\") " pod="openshift-marketplace/community-operators-h6v7r" Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.387999 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76812ff6-8f58-4c5c-8606-3cc8f949146e-utilities\") pod \"community-operators-h6v7r\" (UID: \"76812ff6-8f58-4c5c-8606-3cc8f949146e\") " pod="openshift-marketplace/community-operators-h6v7r" Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.408128 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jqkw\" (UniqueName: \"kubernetes.io/projected/76812ff6-8f58-4c5c-8606-3cc8f949146e-kube-api-access-4jqkw\") pod \"community-operators-h6v7r\" (UID: \"76812ff6-8f58-4c5c-8606-3cc8f949146e\") " pod="openshift-marketplace/community-operators-h6v7r" Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.494522 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6v7r" Jan 30 08:36:32 crc kubenswrapper[4758]: I0130 08:36:32.939728 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h6v7r"] Jan 30 08:36:32 crc kubenswrapper[4758]: W0130 08:36:32.970086 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76812ff6_8f58_4c5c_8606_3cc8f949146e.slice/crio-5228bc1d9d8cc1aec8c3605c20defc06efab86c38ffbd240f56967e33a044bb9 WatchSource:0}: Error finding container 5228bc1d9d8cc1aec8c3605c20defc06efab86c38ffbd240f56967e33a044bb9: Status 404 returned error can't find the container with id 5228bc1d9d8cc1aec8c3605c20defc06efab86c38ffbd240f56967e33a044bb9 Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.129848 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kb7fp"] Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.131111 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.137855 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.146897 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb7fp"] Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.200352 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgsd8\" (UniqueName: \"kubernetes.io/projected/db6d53d3-8c72-4f16-9bf1-f196d3c85e3a-kube-api-access-vgsd8\") pod \"redhat-marketplace-kb7fp\" (UID: \"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a\") " pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.200495 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db6d53d3-8c72-4f16-9bf1-f196d3c85e3a-utilities\") pod \"redhat-marketplace-kb7fp\" (UID: \"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a\") " pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.200543 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db6d53d3-8c72-4f16-9bf1-f196d3c85e3a-catalog-content\") pod \"redhat-marketplace-kb7fp\" (UID: \"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a\") " pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.270243 4758 generic.go:334] "Generic (PLEG): container finished" podID="76812ff6-8f58-4c5c-8606-3cc8f949146e" containerID="21d836d2bd6a23ce1382948dd88b8d83c12e6454681320a0bf47bec815075963" exitCode=0 Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.270397 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6v7r" event={"ID":"76812ff6-8f58-4c5c-8606-3cc8f949146e","Type":"ContainerDied","Data":"21d836d2bd6a23ce1382948dd88b8d83c12e6454681320a0bf47bec815075963"} Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.270642 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6v7r" event={"ID":"76812ff6-8f58-4c5c-8606-3cc8f949146e","Type":"ContainerStarted","Data":"5228bc1d9d8cc1aec8c3605c20defc06efab86c38ffbd240f56967e33a044bb9"} Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.272954 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmg9d" event={"ID":"8404e227-68e2-4686-a04d-00048ba303ec","Type":"ContainerStarted","Data":"38945fb8c984b8a4edc68ba9077681ce4168f5a1108149721489c04b07874e8f"} Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.277325 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7tq2j" event={"ID":"73f8c779-64cc-4d7d-8762-4f8cf1611071","Type":"ContainerStarted","Data":"3258795c1de5b14247ec58cd933008d63c35f76eb411f28fce654c91c4c49f18"} Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.301777 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db6d53d3-8c72-4f16-9bf1-f196d3c85e3a-catalog-content\") pod \"redhat-marketplace-kb7fp\" (UID: 
\"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a\") " pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.302137 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgsd8\" (UniqueName: \"kubernetes.io/projected/db6d53d3-8c72-4f16-9bf1-f196d3c85e3a-kube-api-access-vgsd8\") pod \"redhat-marketplace-kb7fp\" (UID: \"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a\") " pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.302297 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db6d53d3-8c72-4f16-9bf1-f196d3c85e3a-utilities\") pod \"redhat-marketplace-kb7fp\" (UID: \"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a\") " pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.302799 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db6d53d3-8c72-4f16-9bf1-f196d3c85e3a-utilities\") pod \"redhat-marketplace-kb7fp\" (UID: \"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a\") " pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.303145 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db6d53d3-8c72-4f16-9bf1-f196d3c85e3a-catalog-content\") pod \"redhat-marketplace-kb7fp\" (UID: \"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a\") " pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.324897 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7tq2j" podStartSLOduration=2.902215901 podStartE2EDuration="4.324878254s" podCreationTimestamp="2026-01-30 08:36:29 +0000 UTC" firstStartedPulling="2026-01-30 08:36:31.257444793 +0000 UTC m=+396.229756344" lastFinishedPulling="2026-01-30 08:36:32.680107146 +0000 UTC m=+397.652418697" observedRunningTime="2026-01-30 08:36:33.321804828 +0000 UTC m=+398.294116389" watchObservedRunningTime="2026-01-30 08:36:33.324878254 +0000 UTC m=+398.297189805" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.325017 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgsd8\" (UniqueName: \"kubernetes.io/projected/db6d53d3-8c72-4f16-9bf1-f196d3c85e3a-kube-api-access-vgsd8\") pod \"redhat-marketplace-kb7fp\" (UID: \"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a\") " pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.450421 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb7fp" Jan 30 08:36:33 crc kubenswrapper[4758]: I0130 08:36:33.952187 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb7fp"] Jan 30 08:36:33 crc kubenswrapper[4758]: W0130 08:36:33.964213 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb6d53d3_8c72_4f16_9bf1_f196d3c85e3a.slice/crio-9ce8d20071920a1e52514de2715f7a8bb578ce469617b483637e0d430fac25dc WatchSource:0}: Error finding container 9ce8d20071920a1e52514de2715f7a8bb578ce469617b483637e0d430fac25dc: Status 404 returned error can't find the container with id 9ce8d20071920a1e52514de2715f7a8bb578ce469617b483637e0d430fac25dc Jan 30 08:36:34 crc kubenswrapper[4758]: I0130 08:36:34.283950 4758 generic.go:334] "Generic (PLEG): container finished" podID="76812ff6-8f58-4c5c-8606-3cc8f949146e" containerID="7e0ba33626c93259522a7336776f942e30af68d09e32a4f51adb4fc2c123cfa4" exitCode=0 Jan 30 08:36:34 crc kubenswrapper[4758]: I0130 08:36:34.284065 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6v7r" event={"ID":"76812ff6-8f58-4c5c-8606-3cc8f949146e","Type":"ContainerDied","Data":"7e0ba33626c93259522a7336776f942e30af68d09e32a4f51adb4fc2c123cfa4"} Jan 30 08:36:34 crc kubenswrapper[4758]: I0130 08:36:34.286130 4758 generic.go:334] "Generic (PLEG): container finished" podID="8404e227-68e2-4686-a04d-00048ba303ec" containerID="38945fb8c984b8a4edc68ba9077681ce4168f5a1108149721489c04b07874e8f" exitCode=0 Jan 30 08:36:34 crc kubenswrapper[4758]: I0130 08:36:34.286404 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmg9d" event={"ID":"8404e227-68e2-4686-a04d-00048ba303ec","Type":"ContainerDied","Data":"38945fb8c984b8a4edc68ba9077681ce4168f5a1108149721489c04b07874e8f"} Jan 30 08:36:34 crc kubenswrapper[4758]: I0130 08:36:34.290693 4758 generic.go:334] "Generic (PLEG): container finished" podID="db6d53d3-8c72-4f16-9bf1-f196d3c85e3a" containerID="9dcde49a044f38586604c79fa1c732ff68334f7617a8d18dc29f782b45915718" exitCode=0 Jan 30 08:36:34 crc kubenswrapper[4758]: I0130 08:36:34.290829 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb7fp" event={"ID":"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a","Type":"ContainerDied","Data":"9dcde49a044f38586604c79fa1c732ff68334f7617a8d18dc29f782b45915718"} Jan 30 08:36:34 crc kubenswrapper[4758]: I0130 08:36:34.290902 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb7fp" event={"ID":"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a","Type":"ContainerStarted","Data":"9ce8d20071920a1e52514de2715f7a8bb578ce469617b483637e0d430fac25dc"} Jan 30 08:36:35 crc kubenswrapper[4758]: I0130 08:36:35.297117 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmg9d" event={"ID":"8404e227-68e2-4686-a04d-00048ba303ec","Type":"ContainerStarted","Data":"50fdb5f1c67782a83c6840bc3083ecf987283ccf213b1ae429faa56af078d857"} Jan 30 08:36:35 crc kubenswrapper[4758]: I0130 08:36:35.299732 4758 generic.go:334] "Generic (PLEG): container finished" podID="db6d53d3-8c72-4f16-9bf1-f196d3c85e3a" containerID="297fb77668c5725bee55b7909c07ba5e9b3e426383b1d52c070af4bb329f9e93" exitCode=0 Jan 30 08:36:35 crc kubenswrapper[4758]: I0130 08:36:35.299863 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-kb7fp" event={"ID":"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a","Type":"ContainerDied","Data":"297fb77668c5725bee55b7909c07ba5e9b3e426383b1d52c070af4bb329f9e93"} Jan 30 08:36:35 crc kubenswrapper[4758]: I0130 08:36:35.303610 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6v7r" event={"ID":"76812ff6-8f58-4c5c-8606-3cc8f949146e","Type":"ContainerStarted","Data":"e681855d47292914569cf94fea16ebd2c6e82d35ca52f25ec4071606ca0f2baf"} Jan 30 08:36:35 crc kubenswrapper[4758]: I0130 08:36:35.323782 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fmg9d" podStartSLOduration=2.597295719 podStartE2EDuration="5.323765103s" podCreationTimestamp="2026-01-30 08:36:30 +0000 UTC" firstStartedPulling="2026-01-30 08:36:32.266447336 +0000 UTC m=+397.238758887" lastFinishedPulling="2026-01-30 08:36:34.99291672 +0000 UTC m=+399.965228271" observedRunningTime="2026-01-30 08:36:35.319580432 +0000 UTC m=+400.291891993" watchObservedRunningTime="2026-01-30 08:36:35.323765103 +0000 UTC m=+400.296076654" Jan 30 08:36:36 crc kubenswrapper[4758]: I0130 08:36:36.314367 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb7fp" event={"ID":"db6d53d3-8c72-4f16-9bf1-f196d3c85e3a","Type":"ContainerStarted","Data":"7fc28c76864872ae095210db0bb098b7be187468b5278cc847c4b4f303cf18f6"} Jan 30 08:36:36 crc kubenswrapper[4758]: I0130 08:36:36.334878 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h6v7r" podStartSLOduration=2.945673814 podStartE2EDuration="4.334864402s" podCreationTimestamp="2026-01-30 08:36:32 +0000 UTC" firstStartedPulling="2026-01-30 08:36:33.27292478 +0000 UTC m=+398.245236331" lastFinishedPulling="2026-01-30 08:36:34.662115368 +0000 UTC m=+399.634426919" observedRunningTime="2026-01-30 08:36:35.366777186 +0000 UTC m=+400.339088737" watchObservedRunningTime="2026-01-30 08:36:36.334864402 +0000 UTC m=+401.307175953" Jan 30 08:36:36 crc kubenswrapper[4758]: I0130 08:36:36.336204 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kb7fp" podStartSLOduration=1.9451249179999999 podStartE2EDuration="3.336199914s" podCreationTimestamp="2026-01-30 08:36:33 +0000 UTC" firstStartedPulling="2026-01-30 08:36:34.294104115 +0000 UTC m=+399.266415656" lastFinishedPulling="2026-01-30 08:36:35.685179091 +0000 UTC m=+400.657490652" observedRunningTime="2026-01-30 08:36:36.333712446 +0000 UTC m=+401.306024007" watchObservedRunningTime="2026-01-30 08:36:36.336199914 +0000 UTC m=+401.308511465" Jan 30 08:36:40 crc kubenswrapper[4758]: I0130 08:36:40.056200 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:40 crc kubenswrapper[4758]: I0130 08:36:40.056730 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:40 crc kubenswrapper[4758]: I0130 08:36:40.103313 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:40 crc kubenswrapper[4758]: I0130 08:36:40.388105 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7tq2j" Jan 30 08:36:41 crc kubenswrapper[4758]: I0130 
Jan 30 08:36:41 crc kubenswrapper[4758]: I0130 08:36:41.051558 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fmg9d"
Jan 30 08:36:41 crc kubenswrapper[4758]: I0130 08:36:41.051610 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fmg9d"
Jan 30 08:36:41 crc kubenswrapper[4758]: I0130 08:36:41.087622 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fmg9d"
Jan 30 08:36:41 crc kubenswrapper[4758]: I0130 08:36:41.419777 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fmg9d"
Jan 30 08:36:42 crc kubenswrapper[4758]: I0130 08:36:42.495430 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h6v7r"
Jan 30 08:36:42 crc kubenswrapper[4758]: I0130 08:36:42.495507 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h6v7r"
Jan 30 08:36:42 crc kubenswrapper[4758]: I0130 08:36:42.537639 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h6v7r"
Jan 30 08:36:43 crc kubenswrapper[4758]: I0130 08:36:43.402020 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h6v7r"
Jan 30 08:36:43 crc kubenswrapper[4758]: I0130 08:36:43.451655 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kb7fp"
Jan 30 08:36:43 crc kubenswrapper[4758]: I0130 08:36:43.451709 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kb7fp"
Jan 30 08:36:43 crc kubenswrapper[4758]: I0130 08:36:43.495216 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kb7fp"
Jan 30 08:36:44 crc kubenswrapper[4758]: I0130 08:36:44.434724 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kb7fp"
Jan 30 08:36:49 crc kubenswrapper[4758]: I0130 08:36:49.595520 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" podUID="467f100f-83e4-43b0-bcf0-16cfe7cb0393" containerName="registry" containerID="cri-o://1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b" gracePeriod=30
Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.116390 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w"
Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.238957 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-trusted-ca\") pod \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") "
Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.239019 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/467f100f-83e4-43b0-bcf0-16cfe7cb0393-installation-pull-secrets\") pod \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") "
Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.239058 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-bound-sa-token\") pod \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") "
Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.239143 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/467f100f-83e4-43b0-bcf0-16cfe7cb0393-ca-trust-extracted\") pod \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") "
Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.240160 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-certificates\") pod \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") "
Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.240343 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") "
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.241340 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-tls\") pod \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.241383 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn5vx\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-kube-api-access-pn5vx\") pod \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\" (UID: \"467f100f-83e4-43b0-bcf0-16cfe7cb0393\") " Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.241580 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "467f100f-83e4-43b0-bcf0-16cfe7cb0393" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.241926 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.241962 4758 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.249696 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "467f100f-83e4-43b0-bcf0-16cfe7cb0393" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.255183 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467f100f-83e4-43b0-bcf0-16cfe7cb0393-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "467f100f-83e4-43b0-bcf0-16cfe7cb0393" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.261596 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "467f100f-83e4-43b0-bcf0-16cfe7cb0393" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.262022 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-kube-api-access-pn5vx" (OuterVolumeSpecName: "kube-api-access-pn5vx") pod "467f100f-83e4-43b0-bcf0-16cfe7cb0393" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393"). 
InnerVolumeSpecName "kube-api-access-pn5vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.265332 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/467f100f-83e4-43b0-bcf0-16cfe7cb0393-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "467f100f-83e4-43b0-bcf0-16cfe7cb0393" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.269253 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "467f100f-83e4-43b0-bcf0-16cfe7cb0393" (UID: "467f100f-83e4-43b0-bcf0-16cfe7cb0393"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.343611 4758 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/467f100f-83e4-43b0-bcf0-16cfe7cb0393-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.344062 4758 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.344188 4758 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/467f100f-83e4-43b0-bcf0-16cfe7cb0393-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.344263 4758 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.344349 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn5vx\" (UniqueName: \"kubernetes.io/projected/467f100f-83e4-43b0-bcf0-16cfe7cb0393-kube-api-access-pn5vx\") on node \"crc\" DevicePath \"\"" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.417149 4758 generic.go:334] "Generic (PLEG): container finished" podID="467f100f-83e4-43b0-bcf0-16cfe7cb0393" containerID="1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b" exitCode=0 Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.417249 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" event={"ID":"467f100f-83e4-43b0-bcf0-16cfe7cb0393","Type":"ContainerDied","Data":"1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b"} Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.418233 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" event={"ID":"467f100f-83e4-43b0-bcf0-16cfe7cb0393","Type":"ContainerDied","Data":"cf6f00a97d6458a9fc656f00daeea6735f190156c2f34abea4396149d2c34aee"} Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.417332 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fd88w" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.418284 4758 scope.go:117] "RemoveContainer" containerID="1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.440167 4758 scope.go:117] "RemoveContainer" containerID="1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b" Jan 30 08:36:50 crc kubenswrapper[4758]: E0130 08:36:50.440754 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b\": container with ID starting with 1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b not found: ID does not exist" containerID="1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.440802 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b"} err="failed to get container status \"1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b\": rpc error: code = NotFound desc = could not find container \"1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b\": container with ID starting with 1730e9c6ad505aaa520a14f981598015a493f4214136988a8f53e14f44eb5e4b not found: ID does not exist" Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.456215 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fd88w"] Jan 30 08:36:50 crc kubenswrapper[4758]: I0130 08:36:50.462577 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fd88w"] Jan 30 08:36:51 crc kubenswrapper[4758]: I0130 08:36:51.780715 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="467f100f-83e4-43b0-bcf0-16cfe7cb0393" path="/var/lib/kubelet/pods/467f100f-83e4-43b0-bcf0-16cfe7cb0393/volumes" Jan 30 08:36:52 crc kubenswrapper[4758]: I0130 08:36:52.387405 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:36:52 crc kubenswrapper[4758]: I0130 08:36:52.387731 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:36:52 crc kubenswrapper[4758]: I0130 08:36:52.387778 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:36:52 crc kubenswrapper[4758]: I0130 08:36:52.388405 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6250826afc1a7b9f8792f6e6a0fcf6f685f39c7f2ff42574fdb4511420aeb1d1"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:36:52 crc kubenswrapper[4758]: I0130 
08:36:52.388467 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://6250826afc1a7b9f8792f6e6a0fcf6f685f39c7f2ff42574fdb4511420aeb1d1" gracePeriod=600 Jan 30 08:36:53 crc kubenswrapper[4758]: I0130 08:36:53.437636 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="6250826afc1a7b9f8792f6e6a0fcf6f685f39c7f2ff42574fdb4511420aeb1d1" exitCode=0 Jan 30 08:36:53 crc kubenswrapper[4758]: I0130 08:36:53.437921 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"6250826afc1a7b9f8792f6e6a0fcf6f685f39c7f2ff42574fdb4511420aeb1d1"} Jan 30 08:36:53 crc kubenswrapper[4758]: I0130 08:36:53.437946 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"12e346bef216c1af5460ebc9181d8141b01cef9a0c0f222c452a8bead4bdf6f9"} Jan 30 08:36:53 crc kubenswrapper[4758]: I0130 08:36:53.437964 4758 scope.go:117] "RemoveContainer" containerID="1c1b48e143c8d7ab1559b3858cfa07e9057624e1181c19ed7d2d6f77ca375916" Jan 30 08:38:52 crc kubenswrapper[4758]: I0130 08:38:52.388032 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:38:52 crc kubenswrapper[4758]: I0130 08:38:52.388826 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:39:22 crc kubenswrapper[4758]: I0130 08:39:22.386945 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:39:22 crc kubenswrapper[4758]: I0130 08:39:22.387377 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:39:52 crc kubenswrapper[4758]: I0130 08:39:52.387109 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:39:52 crc kubenswrapper[4758]: I0130 08:39:52.387709 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:39:52 crc kubenswrapper[4758]: I0130 08:39:52.387759 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:39:52 crc kubenswrapper[4758]: I0130 08:39:52.388353 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"12e346bef216c1af5460ebc9181d8141b01cef9a0c0f222c452a8bead4bdf6f9"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:39:52 crc kubenswrapper[4758]: I0130 08:39:52.388411 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://12e346bef216c1af5460ebc9181d8141b01cef9a0c0f222c452a8bead4bdf6f9" gracePeriod=600 Jan 30 08:39:52 crc kubenswrapper[4758]: E0130 08:39:52.423234 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95cfcde3_10c8_4ece_a78a_9508f04a0f09.slice/crio-12e346bef216c1af5460ebc9181d8141b01cef9a0c0f222c452a8bead4bdf6f9.scope\": RecentStats: unable to find data in memory cache]" Jan 30 08:39:53 crc kubenswrapper[4758]: I0130 08:39:53.362855 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="12e346bef216c1af5460ebc9181d8141b01cef9a0c0f222c452a8bead4bdf6f9" exitCode=0 Jan 30 08:39:53 crc kubenswrapper[4758]: I0130 08:39:53.362929 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"12e346bef216c1af5460ebc9181d8141b01cef9a0c0f222c452a8bead4bdf6f9"} Jan 30 08:39:53 crc kubenswrapper[4758]: I0130 08:39:53.363215 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"deb29cef8b897137735d2465c08026013481066ac7d08e4c83f4d9efbbed9a89"} Jan 30 08:39:53 crc kubenswrapper[4758]: I0130 08:39:53.363238 4758 scope.go:117] "RemoveContainer" containerID="6250826afc1a7b9f8792f6e6a0fcf6f685f39c7f2ff42574fdb4511420aeb1d1" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.818424 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k"] Jan 30 08:41:49 crc kubenswrapper[4758]: E0130 08:41:49.819159 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467f100f-83e4-43b0-bcf0-16cfe7cb0393" containerName="registry" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.819172 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="467f100f-83e4-43b0-bcf0-16cfe7cb0393" containerName="registry" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.819272 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="467f100f-83e4-43b0-bcf0-16cfe7cb0393" containerName="registry" Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.819646 4758 util.go:30] "No 
Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.823442 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.823759 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.823875 4758 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-bkp4x"
Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.834316 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-gzhzw"]
Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.835021 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-gzhzw"
Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.838745 4758 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hdm2h"
Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.859262 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xrm7r"]
Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.859994 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r"
Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.863452 4758 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-b4tfv"
Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.865509 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k"]
Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.882421 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-gzhzw"]
Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.889896 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjqxb\" (UniqueName: \"kubernetes.io/projected/3ef1df8d-c3ae-4dc4-96e2-3fc73aade7bb-kube-api-access-mjqxb\") pod \"cert-manager-858654f9db-gzhzw\" (UID: \"3ef1df8d-c3ae-4dc4-96e2-3fc73aade7bb\") " pod="cert-manager/cert-manager-858654f9db-gzhzw"
Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.889999 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbzkc\" (UniqueName: \"kubernetes.io/projected/738fc587-0a87-41b5-b2b0-690fa92d754e-kube-api-access-qbzkc\") pod \"cert-manager-cainjector-cf98fcc89-fpg8k\" (UID: \"738fc587-0a87-41b5-b2b0-690fa92d754e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k"
Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.896627 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xrm7r"]
Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.991804 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcvvj\" (UniqueName: \"kubernetes.io/projected/b55a68e5-f198-4525-9d07-2acdcb906d36-kube-api-access-kcvvj\") pod \"cert-manager-webhook-687f57d79b-xrm7r\" (UID: \"b55a68e5-f198-4525-9d07-2acdcb906d36\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r"
Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.991896 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjqxb\" (UniqueName: \"kubernetes.io/projected/3ef1df8d-c3ae-4dc4-96e2-3fc73aade7bb-kube-api-access-mjqxb\") pod \"cert-manager-858654f9db-gzhzw\" (UID: \"3ef1df8d-c3ae-4dc4-96e2-3fc73aade7bb\") " pod="cert-manager/cert-manager-858654f9db-gzhzw"
Jan 30 08:41:49 crc kubenswrapper[4758]: I0130 08:41:49.991938 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbzkc\" (UniqueName: \"kubernetes.io/projected/738fc587-0a87-41b5-b2b0-690fa92d754e-kube-api-access-qbzkc\") pod \"cert-manager-cainjector-cf98fcc89-fpg8k\" (UID: \"738fc587-0a87-41b5-b2b0-690fa92d754e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k"
Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.025849 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjqxb\" (UniqueName: \"kubernetes.io/projected/3ef1df8d-c3ae-4dc4-96e2-3fc73aade7bb-kube-api-access-mjqxb\") pod \"cert-manager-858654f9db-gzhzw\" (UID: \"3ef1df8d-c3ae-4dc4-96e2-3fc73aade7bb\") " pod="cert-manager/cert-manager-858654f9db-gzhzw"
Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.025883 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbzkc\" (UniqueName: \"kubernetes.io/projected/738fc587-0a87-41b5-b2b0-690fa92d754e-kube-api-access-qbzkc\") pod \"cert-manager-cainjector-cf98fcc89-fpg8k\" (UID: \"738fc587-0a87-41b5-b2b0-690fa92d754e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k"
Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.093625 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcvvj\" (UniqueName: \"kubernetes.io/projected/b55a68e5-f198-4525-9d07-2acdcb906d36-kube-api-access-kcvvj\") pod \"cert-manager-webhook-687f57d79b-xrm7r\" (UID: \"b55a68e5-f198-4525-9d07-2acdcb906d36\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r"
Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.111877 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcvvj\" (UniqueName: \"kubernetes.io/projected/b55a68e5-f198-4525-9d07-2acdcb906d36-kube-api-access-kcvvj\") pod \"cert-manager-webhook-687f57d79b-xrm7r\" (UID: \"b55a68e5-f198-4525-9d07-2acdcb906d36\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r"
Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.143439 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k"
Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.162437 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-gzhzw"
Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.183076 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r"
Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.432505 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xrm7r"]
Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.439120 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.502223 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r" event={"ID":"b55a68e5-f198-4525-9d07-2acdcb906d36","Type":"ContainerStarted","Data":"c17ae69228d5790e28ec360f328c86fb5c6b0aee926311ee50cabaae71bbab50"}
Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.694005 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k"]
Jan 30 08:41:50 crc kubenswrapper[4758]: W0130 08:41:50.699216 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod738fc587_0a87_41b5_b2b0_690fa92d754e.slice/crio-b7ef983c9536c1f6451ea69d63a6281964a5465fa0848979581d9f65b7c88e14 WatchSource:0}: Error finding container b7ef983c9536c1f6451ea69d63a6281964a5465fa0848979581d9f65b7c88e14: Status 404 returned error can't find the container with id b7ef983c9536c1f6451ea69d63a6281964a5465fa0848979581d9f65b7c88e14
Jan 30 08:41:50 crc kubenswrapper[4758]: I0130 08:41:50.706009 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-gzhzw"]
Jan 30 08:41:51 crc kubenswrapper[4758]: I0130 08:41:51.513307 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-gzhzw" event={"ID":"3ef1df8d-c3ae-4dc4-96e2-3fc73aade7bb","Type":"ContainerStarted","Data":"4dfb8f90ca285635cf40a8dd2396665461fab51bd5d515cde87c62f01c695018"}
Jan 30 08:41:51 crc kubenswrapper[4758]: I0130 08:41:51.519670 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k" event={"ID":"738fc587-0a87-41b5-b2b0-690fa92d754e","Type":"ContainerStarted","Data":"b7ef983c9536c1f6451ea69d63a6281964a5465fa0848979581d9f65b7c88e14"}
Jan 30 08:41:52 crc kubenswrapper[4758]: I0130 08:41:52.387565 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:41:52 crc kubenswrapper[4758]: I0130 08:41:52.388026 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:41:58 crc kubenswrapper[4758]: I0130 08:41:58.927110 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d2cb9"]
Jan 30 08:41:58 crc kubenswrapper[4758]: I0130 08:41:58.930768 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovn-controller" containerID="cri-o://1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda" gracePeriod=30
Jan 30 08:41:58 crc kubenswrapper[4758]: I0130 08:41:58.930786 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="sbdb" containerID="cri-o://527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26" gracePeriod=30
Jan 30 08:41:58 crc kubenswrapper[4758]: I0130 08:41:58.930819 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="nbdb" containerID="cri-o://38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee" gracePeriod=30
Jan 30 08:41:58 crc kubenswrapper[4758]: I0130 08:41:58.930832 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kube-rbac-proxy-node" containerID="cri-o://86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38" gracePeriod=30
Jan 30 08:41:58 crc kubenswrapper[4758]: I0130 08:41:58.930845 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovn-acl-logging" containerID="cri-o://0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3" gracePeriod=30
Jan 30 08:41:58 crc kubenswrapper[4758]: I0130 08:41:58.930820 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4" gracePeriod=30
Jan 30 08:41:58 crc kubenswrapper[4758]: I0130 08:41:58.930857 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="northd" containerID="cri-o://4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32" gracePeriod=30
Jan 30 08:41:58 crc kubenswrapper[4758]: I0130 08:41:58.974092 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" containerID="cri-o://b0101d7301a7b51eb2ab1a83d9b6004c067fa83b3b027df6dd62ac9569ce0353" gracePeriod=30
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.578196 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/2.log"
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.578997 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/1.log"
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.579066 4758 generic.go:334] "Generic (PLEG): container finished" podID="fac75e9c-fc94-4c83-8613-bce0f4744079" containerID="52cb65a07b895a3f9c811e540c2852dc09469aa1336caa2d4f74c566cc414a19" exitCode=2
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.579130 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99ddw" event={"ID":"fac75e9c-fc94-4c83-8613-bce0f4744079","Type":"ContainerDied","Data":"52cb65a07b895a3f9c811e540c2852dc09469aa1336caa2d4f74c566cc414a19"}
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.579170 4758 scope.go:117] "RemoveContainer" containerID="e128a263c0c274be4aee06977a09e15d61f8025b144f3206dc8b401896b086f1"
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.579938 4758 scope.go:117] "RemoveContainer" containerID="52cb65a07b895a3f9c811e540c2852dc09469aa1336caa2d4f74c566cc414a19"
Jan 30 08:41:59 crc kubenswrapper[4758]: E0130 08:41:59.580348 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-99ddw_openshift-multus(fac75e9c-fc94-4c83-8613-bce0f4744079)\"" pod="openshift-multus/multus-99ddw" podUID="fac75e9c-fc94-4c83-8613-bce0f4744079"
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.592594 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/3.log"
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.601483 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovn-acl-logging/0.log"
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602275 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovn-controller/0.log"
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602789 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="b0101d7301a7b51eb2ab1a83d9b6004c067fa83b3b027df6dd62ac9569ce0353" exitCode=0
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602825 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26" exitCode=0
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602837 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee" exitCode=0
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602851 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32" exitCode=0
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602861 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4" exitCode=0
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602873 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38" exitCode=0
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602882 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3" exitCode=143
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602894 4758 generic.go:334] "Generic (PLEG): container finished" podID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerID="1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda" exitCode=143
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.602928 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"b0101d7301a7b51eb2ab1a83d9b6004c067fa83b3b027df6dd62ac9569ce0353"}
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.603321 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26"}
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.603343 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee"}
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.603355 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32"}
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.603559 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4"}
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.603572 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38"}
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.603586 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3"}
Jan 30 08:41:59 crc kubenswrapper[4758]: I0130 08:41:59.603596 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda"}
Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.765844 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovnkube-controller/3.log"
Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.769798 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovn-acl-logging/0.log"
Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.770859 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovn-controller/0.log"
Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.771971 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9"
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850405 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-557d2"] Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850766 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kube-rbac-proxy-node" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850787 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kube-rbac-proxy-node" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850802 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="nbdb" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850812 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="nbdb" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850829 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="sbdb" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850837 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="sbdb" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850850 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850858 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850869 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kubecfg-setup" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850876 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kubecfg-setup" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850888 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850896 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850905 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850912 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850922 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovn-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850929 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovn-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850938 4758 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovn-acl-logging" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850947 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovn-acl-logging" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850960 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850969 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.850981 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="northd" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.850988 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="northd" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.851002 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851010 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851505 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovn-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851523 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851535 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851547 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kube-rbac-proxy-node" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851557 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="nbdb" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851567 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovn-acl-logging" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851578 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="northd" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851587 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851596 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851607 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 
08:42:00.851618 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851631 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="sbdb" Jan 30 08:42:00 crc kubenswrapper[4758]: E0130 08:42:00.851776 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.851787 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" containerName="ovnkube-controller" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.854139 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866025 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-run-ovn-kubernetes\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866097 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3d8c2967-cb4a-439c-a136-e9cf2bf41635-ovnkube-script-lib\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866130 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-systemd-units\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866169 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-run-openvswitch\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866256 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d8c2967-cb4a-439c-a136-e9cf2bf41635-env-overrides\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866328 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-kubelet\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866350 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
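[Editor's note] The RemoveStaleState burst above is the CPU and memory managers dropping per-container bookkeeping for the deleted ovnkube-node-d2cb9 pod before admitting its replacement; each "removing container" / "Deleted CPUSet assignment" pair is one map entry going away. A toy version of that cleanup in Go (the state map and podUID prefixing are hypothetical simplifications):

    package main

    import "fmt"

    func main() {
        // Container assignments keyed by podUID/containerName, as in the log.
        state := map[string]string{
            "a682aa56/ovn-controller":     "cpus 0-3",
            "a682aa56/nbdb":               "cpus 0-3",
            "a682aa56/ovnkube-controller": "cpus 0-3",
        }
        activePods := map[string]bool{} // pod a682aa56 was deleted

        for key := range state {
            pod := key[:8] // hypothetical short podUID prefix
            if !activePods[pod] {
                fmt.Printf("RemoveStaleState: removing container %s\n", key)
                delete(state, key) // mirrors "Deleted CPUSet assignment"
            }
        }
    }

Deleting entries while ranging over a Go map is safe, which keeps the sweep a single pass.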
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-var-lib-openvswitch\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866411 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57x7v\" (UniqueName: \"kubernetes.io/projected/3d8c2967-cb4a-439c-a136-e9cf2bf41635-kube-api-access-57x7v\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866467 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-cni-netd\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866500 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-log-socket\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866529 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866558 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-slash\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866605 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-run-netns\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866669 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-node-log\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866695 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-run-ovn\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 
08:42:00.866722 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-cni-bin\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866760 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d8c2967-cb4a-439c-a136-e9cf2bf41635-ovn-node-metrics-cert\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866827 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d8c2967-cb4a-439c-a136-e9cf2bf41635-ovnkube-config\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866882 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-etc-openvswitch\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.866937 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-run-systemd\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.967657 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-node-log\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.967731 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-bin\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.967791 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-script-lib\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.967841 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-env-overrides\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.967879 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-etc-openvswitch\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.967898 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-var-lib-openvswitch\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.967937 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-log-socket\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.967977 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-systemd\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968007 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-systemd-units\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968032 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-config\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968085 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-slash\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968125 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovn-node-metrics-cert\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968144 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-ovn-kubernetes\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968167 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-netd\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968184 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-openvswitch\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968221 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmdj9\" (UniqueName: \"kubernetes.io/projected/a682aa56-1a48-46dd-a06c-8cbaaeea7008-kube-api-access-jmdj9\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968255 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-netns\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968285 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-kubelet\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968323 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968368 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-ovn\") pod \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\" (UID: \"a682aa56-1a48-46dd-a06c-8cbaaeea7008\") " Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968526 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968586 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d8c2967-cb4a-439c-a136-e9cf2bf41635-env-overrides\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968612 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968639 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-kubelet\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968644 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968635 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-node-log" (OuterVolumeSpecName: "node-log") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968711 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-log-socket" (OuterVolumeSpecName: "log-socket") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968675 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968739 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-var-lib-openvswitch\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968873 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968914 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968941 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968978 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.968667 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-var-lib-openvswitch\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969123 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57x7v\" (UniqueName: \"kubernetes.io/projected/3d8c2967-cb4a-439c-a136-e9cf2bf41635-kube-api-access-57x7v\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969166 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-cni-netd\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969196 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-log-socket\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969225 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-slash\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969255 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969295 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-run-netns\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969327 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-node-log\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969356 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-run-ovn\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969399 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-cni-bin\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969430 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d8c2967-cb4a-439c-a136-e9cf2bf41635-ovn-node-metrics-cert\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969493 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d8c2967-cb4a-439c-a136-e9cf2bf41635-ovnkube-config\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969526 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-etc-openvswitch\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969553 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-run-systemd\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969582 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-run-ovn-kubernetes\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969613 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3d8c2967-cb4a-439c-a136-e9cf2bf41635-ovnkube-script-lib\") pod 
\"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969642 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-systemd-units\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969668 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-run-openvswitch\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969708 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-kubelet\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969777 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-run-openvswitch\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969743 4758 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969858 4758 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969885 4758 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969909 4758 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969942 4758 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969965 4758 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969991 4758 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" 
DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970013 4758 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970053 4758 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970069 4758 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970138 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-cni-bin\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970192 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-cni-netd\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970241 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-log-socket\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970294 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-slash\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970341 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970380 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-run-netns\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970419 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-node-log\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970451 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-run-ovn\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.970486 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-host-run-ovn-kubernetes\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.971032 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-etc-openvswitch\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969575 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969699 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969745 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-slash" (OuterVolumeSpecName: "host-slash") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969749 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969770 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969788 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.969889 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.971298 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-run-systemd\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.971410 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d8c2967-cb4a-439c-a136-e9cf2bf41635-ovnkube-config\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.971340 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3d8c2967-cb4a-439c-a136-e9cf2bf41635-systemd-units\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.971835 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3d8c2967-cb4a-439c-a136-e9cf2bf41635-ovnkube-script-lib\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.974401 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d8c2967-cb4a-439c-a136-e9cf2bf41635-env-overrides\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.975716 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.976030 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a682aa56-1a48-46dd-a06c-8cbaaeea7008-kube-api-access-jmdj9" (OuterVolumeSpecName: "kube-api-access-jmdj9") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "kube-api-access-jmdj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.986920 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d8c2967-cb4a-439c-a136-e9cf2bf41635-ovn-node-metrics-cert\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.991524 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a682aa56-1a48-46dd-a06c-8cbaaeea7008" (UID: "a682aa56-1a48-46dd-a06c-8cbaaeea7008"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:42:00 crc kubenswrapper[4758]: I0130 08:42:00.996685 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57x7v\" (UniqueName: \"kubernetes.io/projected/3d8c2967-cb4a-439c-a136-e9cf2bf41635-kube-api-access-57x7v\") pod \"ovnkube-node-557d2\" (UID: \"3d8c2967-cb4a-439c-a136-e9cf2bf41635\") " pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.071912 4758 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.072568 4758 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.072585 4758 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.072594 4758 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.072605 4758 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.072631 4758 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.072639 4758 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a682aa56-1a48-46dd-a06c-8cbaaeea7008-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.072649 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmdj9\" (UniqueName: \"kubernetes.io/projected/a682aa56-1a48-46dd-a06c-8cbaaeea7008-kube-api-access-jmdj9\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.072660 4758 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.072669 4758 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a682aa56-1a48-46dd-a06c-8cbaaeea7008-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.183536 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.479582 4758 scope.go:117] "RemoveContainer" containerID="8651815aea3d8021ae5f78afc701de835f4a70924c2f0c2c2b10c757561fd540" Jan 30 08:42:01 crc kubenswrapper[4758]: W0130 08:42:01.540904 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d8c2967_cb4a_439c_a136_e9cf2bf41635.slice/crio-cd595b71ce331668cca6b01630641ec955581696e808d9c860ce200ddd7eb8c1 WatchSource:0}: Error finding container cd595b71ce331668cca6b01630641ec955581696e808d9c860ce200ddd7eb8c1: Status 404 returned error can't find the container with id cd595b71ce331668cca6b01630641ec955581696e808d9c860ce200ddd7eb8c1 Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.622095 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/2.log" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.624306 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerStarted","Data":"cd595b71ce331668cca6b01630641ec955581696e808d9c860ce200ddd7eb8c1"} Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.629514 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovn-acl-logging/0.log" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.634635 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-d2cb9_a682aa56-1a48-46dd-a06c-8cbaaeea7008/ovn-controller/0.log" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.635298 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" event={"ID":"a682aa56-1a48-46dd-a06c-8cbaaeea7008","Type":"ContainerDied","Data":"8d70505dbacf380ad755907b0497b938a8a8916ec3d2072e37bb1856843d9c78"} Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.635360 4758 scope.go:117] "RemoveContainer" containerID="b0101d7301a7b51eb2ab1a83d9b6004c067fa83b3b027df6dd62ac9569ce0353" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.635397 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d2cb9" Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.687029 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d2cb9"] Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.692314 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d2cb9"] Jan 30 08:42:01 crc kubenswrapper[4758]: I0130 08:42:01.777018 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a682aa56-1a48-46dd-a06c-8cbaaeea7008" path="/var/lib/kubelet/pods/a682aa56-1a48-46dd-a06c-8cbaaeea7008/volumes" Jan 30 08:42:02 crc kubenswrapper[4758]: I0130 08:42:02.653206 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-gzhzw" event={"ID":"3ef1df8d-c3ae-4dc4-96e2-3fc73aade7bb","Type":"ContainerStarted","Data":"8377d73a4684655c2a82cb528ad0fb1b898b43892124aba97c643b512a64648b"} Jan 30 08:42:02 crc kubenswrapper[4758]: I0130 08:42:02.655601 4758 generic.go:334] "Generic (PLEG): container finished" podID="3d8c2967-cb4a-439c-a136-e9cf2bf41635" containerID="58dd1c3becbb82dc17e2b1b5f2fc944ca7c5583be8eb280dd1814d2c877c4b6b" exitCode=0 Jan 30 08:42:02 crc kubenswrapper[4758]: I0130 08:42:02.656855 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerDied","Data":"58dd1c3becbb82dc17e2b1b5f2fc944ca7c5583be8eb280dd1814d2c877c4b6b"} Jan 30 08:42:02 crc kubenswrapper[4758]: I0130 08:42:02.681087 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-gzhzw" podStartSLOduration=2.910203879 podStartE2EDuration="13.681057518s" podCreationTimestamp="2026-01-30 08:41:49 +0000 UTC" firstStartedPulling="2026-01-30 08:41:50.71862958 +0000 UTC m=+715.690941131" lastFinishedPulling="2026-01-30 08:42:01.489483199 +0000 UTC m=+726.461794770" observedRunningTime="2026-01-30 08:42:02.67891428 +0000 UTC m=+727.651225831" watchObservedRunningTime="2026-01-30 08:42:02.681057518 +0000 UTC m=+727.653369069" Jan 30 08:42:02 crc kubenswrapper[4758]: I0130 08:42:02.845909 4758 scope.go:117] "RemoveContainer" containerID="527be0f4db4a1308f25425ddf6502b340b7aa94f83928e819c1a28135b5d5d26" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.001702 4758 scope.go:117] "RemoveContainer" containerID="38f5598b49c50d55e797464586e18537c771b70ab7aa818edddd39ea903e5cee" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.028911 4758 scope.go:117] "RemoveContainer" containerID="4f1c1c30b0c8f147e4488b21ea2d369b5f1503ce11dd82c4fac0d4f3ab572d32" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.069749 4758 scope.go:117] "RemoveContainer" containerID="29043be8b494062e40595d4238e6469a75f739c19214c2cfc4dcc921f76bbee4" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.145803 4758 scope.go:117] "RemoveContainer" containerID="86ac6cdea7c7db019ada60654a4c7c5aea0bc94f178f6a630d91f2f5552b4e38" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.173581 4758 scope.go:117] "RemoveContainer" containerID="0a651def785457e885e90f390888063ee290a3414b0544f9969ed450e7fe47e3" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.201536 4758 scope.go:117] "RemoveContainer" containerID="1197e78d0e9ecc9f1c262d7c9a19a923a9061206b932bdf42afcaa154243afda" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.222258 4758 scope.go:117] "RemoveContainer" 
containerID="4cfa8a6d5d2d55d686e93f1600f0dee49fd4fce13cb343d4299d816991adecac" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.664075 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r" event={"ID":"b55a68e5-f198-4525-9d07-2acdcb906d36","Type":"ContainerStarted","Data":"55f9dfbd031e898d8f56752e6f4dfeb70527bb7340f05a9baebca7593e77cfd5"} Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.664294 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.667737 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k" event={"ID":"738fc587-0a87-41b5-b2b0-690fa92d754e","Type":"ContainerStarted","Data":"372f884e9ea99c21a8b85066994ca44362b21b89d8f91c39facae410df48956e"} Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.670233 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerStarted","Data":"b814f7e08f55cc08d1cdadd4666991121543574ad19b6bb1896293d340442000"} Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.710069 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fpg8k" podStartSLOduration=2.40806927 podStartE2EDuration="14.710001744s" podCreationTimestamp="2026-01-30 08:41:49 +0000 UTC" firstStartedPulling="2026-01-30 08:41:50.702014074 +0000 UTC m=+715.674325625" lastFinishedPulling="2026-01-30 08:42:03.003946548 +0000 UTC m=+727.976258099" observedRunningTime="2026-01-30 08:42:03.708524218 +0000 UTC m=+728.680835789" watchObservedRunningTime="2026-01-30 08:42:03.710001744 +0000 UTC m=+728.682313315" Jan 30 08:42:03 crc kubenswrapper[4758]: I0130 08:42:03.713687 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r" podStartSLOduration=2.135347036 podStartE2EDuration="14.71365972s" podCreationTimestamp="2026-01-30 08:41:49 +0000 UTC" firstStartedPulling="2026-01-30 08:41:50.438894984 +0000 UTC m=+715.411206535" lastFinishedPulling="2026-01-30 08:42:03.017207668 +0000 UTC m=+727.989519219" observedRunningTime="2026-01-30 08:42:03.6928059 +0000 UTC m=+728.665117451" watchObservedRunningTime="2026-01-30 08:42:03.71365972 +0000 UTC m=+728.685971291" Jan 30 08:42:04 crc kubenswrapper[4758]: I0130 08:42:04.694949 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerStarted","Data":"595c178e39d07f52d751d6e0f1cdab175348de3d322f2715dc38a2eb70b3bfce"} Jan 30 08:42:04 crc kubenswrapper[4758]: I0130 08:42:04.695676 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerStarted","Data":"b17a964ab4800d994bb8b580b45aa2134a5a3fdcf5a0e6a87de944b836848c71"} Jan 30 08:42:04 crc kubenswrapper[4758]: I0130 08:42:04.695730 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerStarted","Data":"369e2afd4f78c8246fcc075eeafce29b8fe130ce6b8e56c2cfac0441f8beecc8"} Jan 30 08:42:05 crc kubenswrapper[4758]: I0130 08:42:05.712335 4758 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerStarted","Data":"13f62c64a08e71dc87d3e1605e904c4b12751a8aeb95147da3075aed15b26457"} Jan 30 08:42:05 crc kubenswrapper[4758]: I0130 08:42:05.713421 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerStarted","Data":"8bedf04c03bace50024c96304db46454c9a0aea841aab257eec91fc697f48f29"} Jan 30 08:42:05 crc kubenswrapper[4758]: I0130 08:42:05.713543 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerStarted","Data":"920aea4add09045c60a3fcf096bd8b842e677c46130406634200f5d83451de18"} Jan 30 08:42:08 crc kubenswrapper[4758]: I0130 08:42:08.735167 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" event={"ID":"3d8c2967-cb4a-439c-a136-e9cf2bf41635","Type":"ContainerStarted","Data":"19dc707a12956bede6344b0ae76c746d26cd4c8502210b96e037552735be6c49"} Jan 30 08:42:08 crc kubenswrapper[4758]: I0130 08:42:08.736848 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:08 crc kubenswrapper[4758]: I0130 08:42:08.737080 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:08 crc kubenswrapper[4758]: I0130 08:42:08.737098 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:08 crc kubenswrapper[4758]: I0130 08:42:08.772547 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" podStartSLOduration=8.772524038 podStartE2EDuration="8.772524038s" podCreationTimestamp="2026-01-30 08:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:42:08.769792471 +0000 UTC m=+733.742104032" watchObservedRunningTime="2026-01-30 08:42:08.772524038 +0000 UTC m=+733.744835589" Jan 30 08:42:08 crc kubenswrapper[4758]: I0130 08:42:08.781935 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:08 crc kubenswrapper[4758]: I0130 08:42:08.783619 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:10 crc kubenswrapper[4758]: I0130 08:42:10.186764 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-xrm7r" Jan 30 08:42:11 crc kubenswrapper[4758]: I0130 08:42:11.769786 4758 scope.go:117] "RemoveContainer" containerID="52cb65a07b895a3f9c811e540c2852dc09469aa1336caa2d4f74c566cc414a19" Jan 30 08:42:12 crc kubenswrapper[4758]: I0130 08:42:12.768168 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-99ddw_fac75e9c-fc94-4c83-8613-bce0f4744079/kube-multus/2.log" Jan 30 08:42:12 crc kubenswrapper[4758]: I0130 08:42:12.768867 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-99ddw" 
event={"ID":"fac75e9c-fc94-4c83-8613-bce0f4744079","Type":"ContainerStarted","Data":"8cef8c13864175ab940b1c8ed4f018e46504930a6b92827b9643af64ee512783"} Jan 30 08:42:22 crc kubenswrapper[4758]: I0130 08:42:22.387544 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:42:22 crc kubenswrapper[4758]: I0130 08:42:22.388956 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:42:31 crc kubenswrapper[4758]: I0130 08:42:31.209603 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-557d2" Jan 30 08:42:42 crc kubenswrapper[4758]: I0130 08:42:42.601024 4758 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.669111 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j"] Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.671188 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.674581 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.680599 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j"] Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.685988 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.686122 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.686152 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8g7g\" (UniqueName: \"kubernetes.io/projected/e581bd2a-bbb4-476e-821e-f55ba597f41e-kube-api-access-d8g7g\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.787248 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.787296 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.787329 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8g7g\" (UniqueName: \"kubernetes.io/projected/e581bd2a-bbb4-476e-821e-f55ba597f41e-kube-api-access-d8g7g\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.787986 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.788489 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.817053 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8g7g\" (UniqueName: \"kubernetes.io/projected/e581bd2a-bbb4-476e-821e-f55ba597f41e-kube-api-access-d8g7g\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:47 crc kubenswrapper[4758]: I0130 08:42:47.999647 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:48 crc kubenswrapper[4758]: I0130 08:42:48.226856 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j"] Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.003351 4758 generic.go:334] "Generic (PLEG): container finished" podID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerID="8012f80827dd5c036964251a703d12dbaac9d88ff31a50056c753ab46fd88585" exitCode=0 Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.003417 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" event={"ID":"e581bd2a-bbb4-476e-821e-f55ba597f41e","Type":"ContainerDied","Data":"8012f80827dd5c036964251a703d12dbaac9d88ff31a50056c753ab46fd88585"} Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.003497 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" event={"ID":"e581bd2a-bbb4-476e-821e-f55ba597f41e","Type":"ContainerStarted","Data":"b3e0f62cb4bc5d8b6a20dfc2bbf5a0d0e42ad874a77713041b53630c0c9d86e1"} Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.744664 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-772qh"] Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.746465 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.756208 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-772qh"] Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.823385 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-utilities\") pod \"redhat-operators-772qh\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.823662 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-catalog-content\") pod \"redhat-operators-772qh\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.824474 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctqh9\" (UniqueName: \"kubernetes.io/projected/c5a576a2-9393-4f2d-be9a-9838b65db788-kube-api-access-ctqh9\") pod \"redhat-operators-772qh\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.925808 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-catalog-content\") pod \"redhat-operators-772qh\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.926450 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ctqh9\" (UniqueName: \"kubernetes.io/projected/c5a576a2-9393-4f2d-be9a-9838b65db788-kube-api-access-ctqh9\") pod \"redhat-operators-772qh\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.926498 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-utilities\") pod \"redhat-operators-772qh\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.927136 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-utilities\") pod \"redhat-operators-772qh\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.927781 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-catalog-content\") pod \"redhat-operators-772qh\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:49 crc kubenswrapper[4758]: I0130 08:42:49.952187 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctqh9\" (UniqueName: \"kubernetes.io/projected/c5a576a2-9393-4f2d-be9a-9838b65db788-kube-api-access-ctqh9\") pod \"redhat-operators-772qh\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:50 crc kubenswrapper[4758]: I0130 08:42:50.071465 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:42:50 crc kubenswrapper[4758]: I0130 08:42:50.374958 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-772qh"] Jan 30 08:42:50 crc kubenswrapper[4758]: W0130 08:42:50.385113 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5a576a2_9393_4f2d_be9a_9838b65db788.slice/crio-547b02c7c24847c7c021c15373098498e1bae9c391de3b79295090820c70ad25 WatchSource:0}: Error finding container 547b02c7c24847c7c021c15373098498e1bae9c391de3b79295090820c70ad25: Status 404 returned error can't find the container with id 547b02c7c24847c7c021c15373098498e1bae9c391de3b79295090820c70ad25 Jan 30 08:42:51 crc kubenswrapper[4758]: I0130 08:42:51.019606 4758 generic.go:334] "Generic (PLEG): container finished" podID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerID="21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c" exitCode=0 Jan 30 08:42:51 crc kubenswrapper[4758]: I0130 08:42:51.019694 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-772qh" event={"ID":"c5a576a2-9393-4f2d-be9a-9838b65db788","Type":"ContainerDied","Data":"21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c"} Jan 30 08:42:51 crc kubenswrapper[4758]: I0130 08:42:51.020212 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-772qh" event={"ID":"c5a576a2-9393-4f2d-be9a-9838b65db788","Type":"ContainerStarted","Data":"547b02c7c24847c7c021c15373098498e1bae9c391de3b79295090820c70ad25"} Jan 30 08:42:51 crc kubenswrapper[4758]: I0130 08:42:51.031250 4758 generic.go:334] "Generic (PLEG): container finished" podID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerID="22a66147960313ccb4344ea98be24dcf3ea44f81d2d13024cfb477679df4dfbb" exitCode=0 Jan 30 08:42:51 crc kubenswrapper[4758]: I0130 08:42:51.031302 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" event={"ID":"e581bd2a-bbb4-476e-821e-f55ba597f41e","Type":"ContainerDied","Data":"22a66147960313ccb4344ea98be24dcf3ea44f81d2d13024cfb477679df4dfbb"} Jan 30 08:42:52 crc kubenswrapper[4758]: I0130 08:42:52.042258 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-772qh" event={"ID":"c5a576a2-9393-4f2d-be9a-9838b65db788","Type":"ContainerStarted","Data":"4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669"} Jan 30 08:42:52 crc kubenswrapper[4758]: I0130 08:42:52.047106 4758 generic.go:334] "Generic (PLEG): container finished" podID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerID="1356d91fae4bc7767083755b0d4679f06ccf69b868527fde791cde1618c25d0a" exitCode=0 Jan 30 08:42:52 crc kubenswrapper[4758]: I0130 08:42:52.047277 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" event={"ID":"e581bd2a-bbb4-476e-821e-f55ba597f41e","Type":"ContainerDied","Data":"1356d91fae4bc7767083755b0d4679f06ccf69b868527fde791cde1618c25d0a"} Jan 30 08:42:52 crc kubenswrapper[4758]: I0130 08:42:52.387052 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 30 08:42:52 crc kubenswrapper[4758]: I0130 08:42:52.387110 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:42:52 crc kubenswrapper[4758]: I0130 08:42:52.387158 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:42:52 crc kubenswrapper[4758]: I0130 08:42:52.387826 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"deb29cef8b897137735d2465c08026013481066ac7d08e4c83f4d9efbbed9a89"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:42:52 crc kubenswrapper[4758]: I0130 08:42:52.387885 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://deb29cef8b897137735d2465c08026013481066ac7d08e4c83f4d9efbbed9a89" gracePeriod=600 Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.057023 4758 generic.go:334] "Generic (PLEG): container finished" podID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerID="4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669" exitCode=0 Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.057116 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-772qh" event={"ID":"c5a576a2-9393-4f2d-be9a-9838b65db788","Type":"ContainerDied","Data":"4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669"} Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.060621 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="deb29cef8b897137735d2465c08026013481066ac7d08e4c83f4d9efbbed9a89" exitCode=0 Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.060715 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"deb29cef8b897137735d2465c08026013481066ac7d08e4c83f4d9efbbed9a89"} Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.060770 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"69b903b761c0949dbe1210bdef2b095568ae802b78c69042a2686b209bdd29e6"} Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.060791 4758 scope.go:117] "RemoveContainer" containerID="12e346bef216c1af5460ebc9181d8141b01cef9a0c0f222c452a8bead4bdf6f9" Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.361389 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.378014 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-bundle\") pod \"e581bd2a-bbb4-476e-821e-f55ba597f41e\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.378130 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8g7g\" (UniqueName: \"kubernetes.io/projected/e581bd2a-bbb4-476e-821e-f55ba597f41e-kube-api-access-d8g7g\") pod \"e581bd2a-bbb4-476e-821e-f55ba597f41e\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.378322 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-util\") pod \"e581bd2a-bbb4-476e-821e-f55ba597f41e\" (UID: \"e581bd2a-bbb4-476e-821e-f55ba597f41e\") " Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.379659 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-bundle" (OuterVolumeSpecName: "bundle") pod "e581bd2a-bbb4-476e-821e-f55ba597f41e" (UID: "e581bd2a-bbb4-476e-821e-f55ba597f41e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.388499 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e581bd2a-bbb4-476e-821e-f55ba597f41e-kube-api-access-d8g7g" (OuterVolumeSpecName: "kube-api-access-d8g7g") pod "e581bd2a-bbb4-476e-821e-f55ba597f41e" (UID: "e581bd2a-bbb4-476e-821e-f55ba597f41e"). InnerVolumeSpecName "kube-api-access-d8g7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.410509 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-util" (OuterVolumeSpecName: "util") pod "e581bd2a-bbb4-476e-821e-f55ba597f41e" (UID: "e581bd2a-bbb4-476e-821e-f55ba597f41e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.480252 4758 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.480340 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8g7g\" (UniqueName: \"kubernetes.io/projected/e581bd2a-bbb4-476e-821e-f55ba597f41e-kube-api-access-d8g7g\") on node \"crc\" DevicePath \"\""
Jan 30 08:42:53 crc kubenswrapper[4758]: I0130 08:42:53.480356 4758 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e581bd2a-bbb4-476e-821e-f55ba597f41e-util\") on node \"crc\" DevicePath \"\""
Jan 30 08:42:54 crc kubenswrapper[4758]: I0130 08:42:54.073456 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j" event={"ID":"e581bd2a-bbb4-476e-821e-f55ba597f41e","Type":"ContainerDied","Data":"b3e0f62cb4bc5d8b6a20dfc2bbf5a0d0e42ad874a77713041b53630c0c9d86e1"}
Jan 30 08:42:54 crc kubenswrapper[4758]: I0130 08:42:54.074183 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3e0f62cb4bc5d8b6a20dfc2bbf5a0d0e42ad874a77713041b53630c0c9d86e1"
Jan 30 08:42:54 crc kubenswrapper[4758]: I0130 08:42:54.073497 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j"
Jan 30 08:42:55 crc kubenswrapper[4758]: I0130 08:42:55.091163 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-772qh" event={"ID":"c5a576a2-9393-4f2d-be9a-9838b65db788","Type":"ContainerStarted","Data":"33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e"}
Jan 30 08:42:55 crc kubenswrapper[4758]: I0130 08:42:55.119511 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-772qh" podStartSLOduration=2.63789184 podStartE2EDuration="6.119479891s" podCreationTimestamp="2026-01-30 08:42:49 +0000 UTC" firstStartedPulling="2026-01-30 08:42:51.028194688 +0000 UTC m=+776.000506249" lastFinishedPulling="2026-01-30 08:42:54.509782749 +0000 UTC m=+779.482094300" observedRunningTime="2026-01-30 08:42:55.113400737 +0000 UTC m=+780.085712328" watchObservedRunningTime="2026-01-30 08:42:55.119479891 +0000 UTC m=+780.091791442"
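The pod_startup_latency_tracker entry above is internally consistent: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling on the monotonic m=+ clock). A minimal Go sketch of that arithmetic, with the constants copied from the log entry (this is an illustration, not kubelet's actual code):

package main

import "fmt"

func main() {
	// Monotonic clock offsets (seconds) copied from the log entry above.
	const (
		firstStartedPulling = 776.000506249
		lastFinishedPulling = 779.482094300
		podStartE2E         = 6.119479891 // podStartE2EDuration
	)
	imagePull := lastFinishedPulling - firstStartedPulling // ~3.481588051s
	slo := podStartE2E - imagePull
	fmt.Printf("podStartSLOduration ~= %.8f\n", slo) // ~2.63789184, matching the log
}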
Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.084553 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-r9vhp"]
Jan 30 08:42:58 crc kubenswrapper[4758]: E0130 08:42:58.085354 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerName="extract"
Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.085372 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerName="extract"
Jan 30 08:42:58 crc kubenswrapper[4758]: E0130 08:42:58.085396 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerName="util"
Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.085403 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerName="util"
Jan 30 08:42:58 crc kubenswrapper[4758]: E0130 08:42:58.085411 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerName="pull"
Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.085417 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerName="pull"
Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.085538 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e581bd2a-bbb4-476e-821e-f55ba597f41e" containerName="extract"
Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.086264 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-r9vhp"
Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.089526 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.090314 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-c9stz"
Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.090498 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.105484 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-r9vhp"]
Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.154841 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q2nj\" (UniqueName: \"kubernetes.io/projected/1d70f37f-b0fc-48e9-ba5d-50c0e6187fa1-kube-api-access-6q2nj\") pod \"nmstate-operator-646758c888-r9vhp\" (UID: \"1d70f37f-b0fc-48e9-ba5d-50c0e6187fa1\") " pod="openshift-nmstate/nmstate-operator-646758c888-r9vhp"
Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.255886 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q2nj\" (UniqueName: \"kubernetes.io/projected/1d70f37f-b0fc-48e9-ba5d-50c0e6187fa1-kube-api-access-6q2nj\") pod \"nmstate-operator-646758c888-r9vhp\" (UID: \"1d70f37f-b0fc-48e9-ba5d-50c0e6187fa1\") " pod="openshift-nmstate/nmstate-operator-646758c888-r9vhp"
Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.288547 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q2nj\" (UniqueName: \"kubernetes.io/projected/1d70f37f-b0fc-48e9-ba5d-50c0e6187fa1-kube-api-access-6q2nj\") pod \"nmstate-operator-646758c888-r9vhp\" (UID: \"1d70f37f-b0fc-48e9-ba5d-50c0e6187fa1\") " pod="openshift-nmstate/nmstate-operator-646758c888-r9vhp"
Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.410567 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-r9vhp"
Jan 30 08:42:58 crc kubenswrapper[4758]: I0130 08:42:58.734512 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-r9vhp"]
Jan 30 08:42:59 crc kubenswrapper[4758]: I0130 08:42:59.117887 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-r9vhp" event={"ID":"1d70f37f-b0fc-48e9-ba5d-50c0e6187fa1","Type":"ContainerStarted","Data":"e7e537fc52ef0e35371b58f873d73081815cc17f84a2bfbd5a1288ac12f4fc67"}
Jan 30 08:43:00 crc kubenswrapper[4758]: I0130 08:43:00.072478 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-772qh"
Jan 30 08:43:00 crc kubenswrapper[4758]: I0130 08:43:00.072558 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-772qh"
Jan 30 08:43:01 crc kubenswrapper[4758]: I0130 08:43:01.125409 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-772qh" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerName="registry-server" probeResult="failure" output=<
Jan 30 08:43:01 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s
Jan 30 08:43:01 crc kubenswrapper[4758]: >
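The probe output above ("timeout: failed to connect service \":50051\" within 1s") is what a gRPC-style health check prints while the registry-server's port is not yet accepting connections. A minimal sketch of such a check in Go, assuming the probed service implements the standard grpc.health.v1 protocol and that grpc-go >= 1.63 is available (the log shows only the probe's output, not its implementation):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// 1s budget, mirroring the "within 1s" in the probe output above.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// grpc.NewClient is the non-deprecated constructor in grpc-go >= 1.63;
	// older releases use grpc.Dial. Target port is taken from the log.
	conn, err := grpc.NewClient("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to create client:", err)
		os.Exit(1)
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
		// A refused or absent listener surfaces here, like the failure above.
		fmt.Fprintf(os.Stderr, "unhealthy: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("status: SERVING")
}

The startup probe later flips to status="started" at 08:43:10 once the registry-server begins serving, which is consistent with this kind of check succeeding.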
Jan 30 08:43:02 crc kubenswrapper[4758]: I0130 08:43:02.141110 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-r9vhp" event={"ID":"1d70f37f-b0fc-48e9-ba5d-50c0e6187fa1","Type":"ContainerStarted","Data":"e4d2416678b0b8bb5dbe44644a74eb9a0014be11575be6a2c758d8f95d23a15f"}
Jan 30 08:43:02 crc kubenswrapper[4758]: I0130 08:43:02.159213 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-r9vhp" podStartSLOduration=1.795149173 podStartE2EDuration="4.159182599s" podCreationTimestamp="2026-01-30 08:42:58 +0000 UTC" firstStartedPulling="2026-01-30 08:42:58.755324864 +0000 UTC m=+783.727636415" lastFinishedPulling="2026-01-30 08:43:01.11935829 +0000 UTC m=+786.091669841" observedRunningTime="2026-01-30 08:43:02.157133405 +0000 UTC m=+787.129444986" watchObservedRunningTime="2026-01-30 08:43:02.159182599 +0000 UTC m=+787.131494150"
Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.691146 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-9d5tr"]
Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.693153 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-9d5tr"
Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.703519 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-hlt5d"
Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.704104 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2"]
Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.704867 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2"
Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.707501 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.717723 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-9d5tr"]
Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.720599 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vmfs\" (UniqueName: \"kubernetes.io/projected/767b8ab5-0385-4ee5-a65c-20d58550812e-kube-api-access-5vmfs\") pod \"nmstate-metrics-54757c584b-9d5tr\" (UID: \"767b8ab5-0385-4ee5-a65c-20d58550812e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-9d5tr"
Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.746227 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2"]
Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.763377 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-jjswr"]
Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.764894 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-jjswr"
Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.822238 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b3b445ff-f326-4f7a-9f01-557ea0ac488e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-tp5p2\" (UID: \"b3b445ff-f326-4f7a-9f01-557ea0ac488e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2"
Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.822622 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmkcv\" (UniqueName: \"kubernetes.io/projected/619a7108-d329-4b73-84eb-4258a2bfe118-kube-api-access-mmkcv\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr"
Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.822765 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/619a7108-d329-4b73-84eb-4258a2bfe118-dbus-socket\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr"
Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.822974 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/619a7108-d329-4b73-84eb-4258a2bfe118-ovs-socket\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr"
Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.823117 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vmfs\" (UniqueName: \"kubernetes.io/projected/767b8ab5-0385-4ee5-a65c-20d58550812e-kube-api-access-5vmfs\") pod \"nmstate-metrics-54757c584b-9d5tr\" (UID: \"767b8ab5-0385-4ee5-a65c-20d58550812e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-9d5tr"
Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.823251 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/619a7108-d329-4b73-84eb-4258a2bfe118-nmstate-lock\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.824469 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g64fp\" (UniqueName: \"kubernetes.io/projected/b3b445ff-f326-4f7a-9f01-557ea0ac488e-kube-api-access-g64fp\") pod \"nmstate-webhook-8474b5b9d8-tp5p2\" (UID: \"b3b445ff-f326-4f7a-9f01-557ea0ac488e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.847695 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vmfs\" (UniqueName: \"kubernetes.io/projected/767b8ab5-0385-4ee5-a65c-20d58550812e-kube-api-access-5vmfs\") pod \"nmstate-metrics-54757c584b-9d5tr\" (UID: \"767b8ab5-0385-4ee5-a65c-20d58550812e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-9d5tr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.913456 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj"] Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.914574 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.918404 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.918414 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.923779 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-nm8hd" Jan 30 08:43:07 crc kubenswrapper[4758]: E0130 08:43:07.927510 4758 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 30 08:43:07 crc kubenswrapper[4758]: E0130 08:43:07.927598 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3b445ff-f326-4f7a-9f01-557ea0ac488e-tls-key-pair podName:b3b445ff-f326-4f7a-9f01-557ea0ac488e nodeName:}" failed. No retries permitted until 2026-01-30 08:43:08.427572465 +0000 UTC m=+793.399884016 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/b3b445ff-f326-4f7a-9f01-557ea0ac488e-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-tp5p2" (UID: "b3b445ff-f326-4f7a-9f01-557ea0ac488e") : secret "openshift-nmstate-webhook" not found Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.927550 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b3b445ff-f326-4f7a-9f01-557ea0ac488e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-tp5p2\" (UID: \"b3b445ff-f326-4f7a-9f01-557ea0ac488e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.928426 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmkcv\" (UniqueName: \"kubernetes.io/projected/619a7108-d329-4b73-84eb-4258a2bfe118-kube-api-access-mmkcv\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.928462 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/619a7108-d329-4b73-84eb-4258a2bfe118-dbus-socket\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.928542 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/619a7108-d329-4b73-84eb-4258a2bfe118-ovs-socket\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.928672 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/619a7108-d329-4b73-84eb-4258a2bfe118-nmstate-lock\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.928703 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g64fp\" (UniqueName: \"kubernetes.io/projected/b3b445ff-f326-4f7a-9f01-557ea0ac488e-kube-api-access-g64fp\") pod \"nmstate-webhook-8474b5b9d8-tp5p2\" (UID: \"b3b445ff-f326-4f7a-9f01-557ea0ac488e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.928967 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/619a7108-d329-4b73-84eb-4258a2bfe118-dbus-socket\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.929015 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/619a7108-d329-4b73-84eb-4258a2bfe118-ovs-socket\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.929057 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/619a7108-d329-4b73-84eb-4258a2bfe118-nmstate-lock\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.946078 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g64fp\" (UniqueName: \"kubernetes.io/projected/b3b445ff-f326-4f7a-9f01-557ea0ac488e-kube-api-access-g64fp\") pod \"nmstate-webhook-8474b5b9d8-tp5p2\" (UID: \"b3b445ff-f326-4f7a-9f01-557ea0ac488e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.952281 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmkcv\" (UniqueName: \"kubernetes.io/projected/619a7108-d329-4b73-84eb-4258a2bfe118-kube-api-access-mmkcv\") pod \"nmstate-handler-jjswr\" (UID: \"619a7108-d329-4b73-84eb-4258a2bfe118\") " pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:07 crc kubenswrapper[4758]: I0130 08:43:07.961170 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj"] Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.026809 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-9d5tr" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.084248 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.135283 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/edf1a46e-1ddb-45c6-b545-911d0f651ee9-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5w4jj\" (UID: \"edf1a46e-1ddb-45c6-b545-911d0f651ee9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.135339 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd7tn\" (UniqueName: \"kubernetes.io/projected/edf1a46e-1ddb-45c6-b545-911d0f651ee9-kube-api-access-qd7tn\") pod \"nmstate-console-plugin-7754f76f8b-5w4jj\" (UID: \"edf1a46e-1ddb-45c6-b545-911d0f651ee9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.135381 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/edf1a46e-1ddb-45c6-b545-911d0f651ee9-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5w4jj\" (UID: \"edf1a46e-1ddb-45c6-b545-911d0f651ee9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.135463 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-56f8498f99-h8fr5"] Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.136432 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.199949 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jjswr" event={"ID":"619a7108-d329-4b73-84eb-4258a2bfe118","Type":"ContainerStarted","Data":"45895ae2e4834a412ff7fcb3335322f58e676cd3627f4a3eb53e671583ed9844"} Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.208774 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56f8498f99-h8fr5"] Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253072 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcfjf\" (UniqueName: \"kubernetes.io/projected/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-kube-api-access-dcfjf\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253491 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-oauth-serving-cert\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253525 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/edf1a46e-1ddb-45c6-b545-911d0f651ee9-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5w4jj\" (UID: \"edf1a46e-1ddb-45c6-b545-911d0f651ee9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253545 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-console-config\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253571 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd7tn\" (UniqueName: \"kubernetes.io/projected/edf1a46e-1ddb-45c6-b545-911d0f651ee9-kube-api-access-qd7tn\") pod \"nmstate-console-plugin-7754f76f8b-5w4jj\" (UID: \"edf1a46e-1ddb-45c6-b545-911d0f651ee9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253597 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/edf1a46e-1ddb-45c6-b545-911d0f651ee9-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5w4jj\" (UID: \"edf1a46e-1ddb-45c6-b545-911d0f651ee9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253629 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-console-oauth-config\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253645 
4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-trusted-ca-bundle\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253673 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-console-serving-cert\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.253706 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-service-ca\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.255264 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/edf1a46e-1ddb-45c6-b545-911d0f651ee9-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5w4jj\" (UID: \"edf1a46e-1ddb-45c6-b545-911d0f651ee9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.261746 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/edf1a46e-1ddb-45c6-b545-911d0f651ee9-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5w4jj\" (UID: \"edf1a46e-1ddb-45c6-b545-911d0f651ee9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.291738 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd7tn\" (UniqueName: \"kubernetes.io/projected/edf1a46e-1ddb-45c6-b545-911d0f651ee9-kube-api-access-qd7tn\") pod \"nmstate-console-plugin-7754f76f8b-5w4jj\" (UID: \"edf1a46e-1ddb-45c6-b545-911d0f651ee9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.355176 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcfjf\" (UniqueName: \"kubernetes.io/projected/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-kube-api-access-dcfjf\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.355307 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-oauth-serving-cert\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.355332 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-console-config\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " 
pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.355371 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-console-oauth-config\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.355387 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-trusted-ca-bundle\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.355410 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-console-serving-cert\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.355445 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-service-ca\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.356404 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-service-ca\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.357358 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-oauth-serving-cert\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.358168 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-trusted-ca-bundle\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.360462 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-console-oauth-config\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.361094 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-console-config\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 
08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.376069 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-console-serving-cert\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.409359 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcfjf\" (UniqueName: \"kubernetes.io/projected/abd6f4b8-59ba-4439-b155-9a283fbc4b3f-kube-api-access-dcfjf\") pod \"console-56f8498f99-h8fr5\" (UID: \"abd6f4b8-59ba-4439-b155-9a283fbc4b3f\") " pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.456793 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b3b445ff-f326-4f7a-9f01-557ea0ac488e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-tp5p2\" (UID: \"b3b445ff-f326-4f7a-9f01-557ea0ac488e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.461313 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.466795 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b3b445ff-f326-4f7a-9f01-557ea0ac488e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-tp5p2\" (UID: \"b3b445ff-f326-4f7a-9f01-557ea0ac488e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.528127 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-9d5tr"] Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.538235 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.632979 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.845510 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj"] Jan 30 08:43:08 crc kubenswrapper[4758]: W0130 08:43:08.853186 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedf1a46e_1ddb_45c6_b545_911d0f651ee9.slice/crio-162e1b32ab8ebe97547d442742b7ee9d0680b7cba6388a9652ec200aacb60b18 WatchSource:0}: Error finding container 162e1b32ab8ebe97547d442742b7ee9d0680b7cba6388a9652ec200aacb60b18: Status 404 returned error can't find the container with id 162e1b32ab8ebe97547d442742b7ee9d0680b7cba6388a9652ec200aacb60b18 Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.899972 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2"] Jan 30 08:43:08 crc kubenswrapper[4758]: I0130 08:43:08.992533 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56f8498f99-h8fr5"] Jan 30 08:43:09 crc kubenswrapper[4758]: I0130 08:43:09.208004 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" event={"ID":"edf1a46e-1ddb-45c6-b545-911d0f651ee9","Type":"ContainerStarted","Data":"162e1b32ab8ebe97547d442742b7ee9d0680b7cba6388a9652ec200aacb60b18"} Jan 30 08:43:09 crc kubenswrapper[4758]: I0130 08:43:09.209872 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-9d5tr" event={"ID":"767b8ab5-0385-4ee5-a65c-20d58550812e","Type":"ContainerStarted","Data":"61218ee06e860d4810de00528cdf4b89f598247f92db6ed94b39fe24c030bdc9"} Jan 30 08:43:09 crc kubenswrapper[4758]: I0130 08:43:09.211781 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" event={"ID":"b3b445ff-f326-4f7a-9f01-557ea0ac488e","Type":"ContainerStarted","Data":"0925d2b376eef54054cff6c78963b3250bb0044d238a21f8290c2a06ae917a04"} Jan 30 08:43:09 crc kubenswrapper[4758]: I0130 08:43:09.212796 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56f8498f99-h8fr5" event={"ID":"abd6f4b8-59ba-4439-b155-9a283fbc4b3f","Type":"ContainerStarted","Data":"9fcab79ffc5670c61c3339bbf9ce6d69794843ecfc576fb91882524d57fdee36"} Jan 30 08:43:10 crc kubenswrapper[4758]: I0130 08:43:10.129974 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:43:10 crc kubenswrapper[4758]: I0130 08:43:10.192254 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:43:10 crc kubenswrapper[4758]: I0130 08:43:10.226477 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56f8498f99-h8fr5" event={"ID":"abd6f4b8-59ba-4439-b155-9a283fbc4b3f","Type":"ContainerStarted","Data":"56ccd8385b23655c33ba34f74b898eb838ee72a21760b3705a3c53a8b0a07c72"} Jan 30 08:43:10 crc kubenswrapper[4758]: I0130 08:43:10.258565 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-56f8498f99-h8fr5" podStartSLOduration=2.258537318 podStartE2EDuration="2.258537318s" podCreationTimestamp="2026-01-30 08:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-30 08:43:10.244934236 +0000 UTC m=+795.217245797" watchObservedRunningTime="2026-01-30 08:43:10.258537318 +0000 UTC m=+795.230848879" Jan 30 08:43:10 crc kubenswrapper[4758]: I0130 08:43:10.373699 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-772qh"] Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.233964 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-772qh" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerName="registry-server" containerID="cri-o://33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e" gracePeriod=2 Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.647245 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.820750 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctqh9\" (UniqueName: \"kubernetes.io/projected/c5a576a2-9393-4f2d-be9a-9838b65db788-kube-api-access-ctqh9\") pod \"c5a576a2-9393-4f2d-be9a-9838b65db788\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.820800 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-utilities\") pod \"c5a576a2-9393-4f2d-be9a-9838b65db788\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.820833 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-catalog-content\") pod \"c5a576a2-9393-4f2d-be9a-9838b65db788\" (UID: \"c5a576a2-9393-4f2d-be9a-9838b65db788\") " Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.822335 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-utilities" (OuterVolumeSpecName: "utilities") pod "c5a576a2-9393-4f2d-be9a-9838b65db788" (UID: "c5a576a2-9393-4f2d-be9a-9838b65db788"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.827434 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a576a2-9393-4f2d-be9a-9838b65db788-kube-api-access-ctqh9" (OuterVolumeSpecName: "kube-api-access-ctqh9") pod "c5a576a2-9393-4f2d-be9a-9838b65db788" (UID: "c5a576a2-9393-4f2d-be9a-9838b65db788"). InnerVolumeSpecName "kube-api-access-ctqh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.921839 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctqh9\" (UniqueName: \"kubernetes.io/projected/c5a576a2-9393-4f2d-be9a-9838b65db788-kube-api-access-ctqh9\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.922297 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:11 crc kubenswrapper[4758]: I0130 08:43:11.953231 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5a576a2-9393-4f2d-be9a-9838b65db788" (UID: "c5a576a2-9393-4f2d-be9a-9838b65db788"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.023090 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a576a2-9393-4f2d-be9a-9838b65db788-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.242161 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-9d5tr" event={"ID":"767b8ab5-0385-4ee5-a65c-20d58550812e","Type":"ContainerStarted","Data":"b86d0cbb8f23390f3b6c78572cf6ec14bd320a0326fb326faf4a2e5e823d4a47"} Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.245980 4758 generic.go:334] "Generic (PLEG): container finished" podID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerID="33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e" exitCode=0 Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.246170 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-772qh" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.248226 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-772qh" event={"ID":"c5a576a2-9393-4f2d-be9a-9838b65db788","Type":"ContainerDied","Data":"33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e"} Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.248278 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-772qh" event={"ID":"c5a576a2-9393-4f2d-be9a-9838b65db788","Type":"ContainerDied","Data":"547b02c7c24847c7c021c15373098498e1bae9c391de3b79295090820c70ad25"} Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.248307 4758 scope.go:117] "RemoveContainer" containerID="33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.254648 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" event={"ID":"b3b445ff-f326-4f7a-9f01-557ea0ac488e","Type":"ContainerStarted","Data":"1ce5fb66e3661009471de20890f3ae7bbabd4e5c657b723bbe5f2cd02660eb20"} Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.254730 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.259749 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jjswr" event={"ID":"619a7108-d329-4b73-84eb-4258a2bfe118","Type":"ContainerStarted","Data":"f7ca91b2991049b0fe4eaf7c7c82407c77b9ba3ac87941ac9888be454775df81"} Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.259845 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-jjswr" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.262674 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5w4jj" event={"ID":"edf1a46e-1ddb-45c6-b545-911d0f651ee9","Type":"ContainerStarted","Data":"3b83e15b267938fe6bee4924ac1d862999ed51b63cc0cc5ae918d82deaec66ca"} Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.291343 4758 scope.go:117] "RemoveContainer" containerID="4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.293069 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" podStartSLOduration=2.738296622 podStartE2EDuration="5.293015362s" podCreationTimestamp="2026-01-30 08:43:07 +0000 UTC" firstStartedPulling="2026-01-30 08:43:08.910702922 +0000 UTC m=+793.883014473" lastFinishedPulling="2026-01-30 08:43:11.465421662 +0000 UTC m=+796.437733213" observedRunningTime="2026-01-30 08:43:12.281787235 +0000 UTC m=+797.254098786" watchObservedRunningTime="2026-01-30 08:43:12.293015362 +0000 UTC m=+797.265326913" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.308178 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-772qh"] Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.327850 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-772qh"] Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.345782 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-handler-jjswr" podStartSLOduration=2.026049551 podStartE2EDuration="5.345729499s" podCreationTimestamp="2026-01-30 08:43:07 +0000 UTC" firstStartedPulling="2026-01-30 08:43:08.147265022 +0000 UTC m=+793.119576573" lastFinishedPulling="2026-01-30 08:43:11.46694497 +0000 UTC m=+796.439256521" observedRunningTime="2026-01-30 08:43:12.342695213 +0000 UTC m=+797.315006764" watchObservedRunningTime="2026-01-30 08:43:12.345729499 +0000 UTC m=+797.318041050" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.348737 4758 scope.go:117] "RemoveContainer" containerID="21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.367201 4758 scope.go:117] "RemoveContainer" containerID="33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e" Jan 30 08:43:12 crc kubenswrapper[4758]: E0130 08:43:12.368583 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e\": container with ID starting with 33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e not found: ID does not exist" containerID="33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.368625 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e"} err="failed to get container status \"33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e\": rpc error: code = NotFound desc = could not find container \"33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e\": container with ID starting with 33062457f167ccb9a466bcb9712f517d9d8b836929b8adf1b9bf1918dabe3b4e not found: ID does not exist" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.368655 4758 scope.go:117] "RemoveContainer" containerID="4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669" Jan 30 08:43:12 crc kubenswrapper[4758]: E0130 08:43:12.369071 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669\": container with ID starting with 4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669 not found: ID does not exist" containerID="4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.369091 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669"} err="failed to get container status \"4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669\": rpc error: code = NotFound desc = could not find container \"4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669\": container with ID starting with 4ffa69a4beefc7d916b25f6bb251267a672af3c0cce42df440b2537ce7324669 not found: ID does not exist" Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.369107 4758 scope.go:117] "RemoveContainer" containerID="21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c" Jan 30 08:43:12 crc kubenswrapper[4758]: E0130 08:43:12.373139 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c\": container with ID starting with 21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c not found: ID does not exist" containerID="21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c"
Jan 30 08:43:12 crc kubenswrapper[4758]: I0130 08:43:12.373197 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c"} err="failed to get container status \"21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c\": rpc error: code = NotFound desc = could not find container \"21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c\": container with ID starting with 21a30ee45d65b75c52b92ea0b3948d1833e934e98a734ae9ccf69a83d3658b6c not found: ID does not exist"
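The three RemoveContainer / ContainerStatus exchanges above all fail with gRPC code = NotFound because the containers were already deleted along with the redhat-operators-772qh pod sandbox; the kubelet logs the error and continues. A client of a CRI runtime can treat that status code as "already gone", roughly as in this Go sketch (isAlreadyRemoved is a hypothetical helper for illustration, not kubelet's actual code):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isAlreadyRemoved reports whether a CRI call failed only because the
// container no longer exists (gRPC NotFound), as in the log entries above.
func isAlreadyRemoved(err error) bool {
	s, ok := status.FromError(err)
	return ok && s.Code() == codes.NotFound
}

func main() {
	// Simulated runtime error shaped like the ones in the log.
	err := status.Error(codes.NotFound, "could not find container \"33062457...\"")
	if isAlreadyRemoved(err) {
		fmt.Println("container already removed; treating delete as a no-op")
	}
}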
(probe)" probe="readiness" status="ready" pod="openshift-console/console-56f8498f99-h8fr5" Jan 30 08:43:19 crc kubenswrapper[4758]: I0130 08:43:19.429342 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rxgh6"] Jan 30 08:43:28 crc kubenswrapper[4758]: I0130 08:43:28.638790 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tp5p2" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.664699 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r"] Jan 30 08:43:41 crc kubenswrapper[4758]: E0130 08:43:41.666312 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerName="extract-utilities" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.666394 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerName="extract-utilities" Jan 30 08:43:41 crc kubenswrapper[4758]: E0130 08:43:41.666467 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerName="extract-content" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.666543 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerName="extract-content" Jan 30 08:43:41 crc kubenswrapper[4758]: E0130 08:43:41.666619 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerName="registry-server" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.666688 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerName="registry-server" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.667136 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a576a2-9393-4f2d-be9a-9838b65db788" containerName="registry-server" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.668057 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.670359 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.681186 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r"] Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.705234 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.705311 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.705345 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m55sx\" (UniqueName: \"kubernetes.io/projected/749e159a-10a8-4704-a263-3ec389807647-kube-api-access-m55sx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.806029 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.806157 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.806183 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m55sx\" (UniqueName: \"kubernetes.io/projected/749e159a-10a8-4704-a263-3ec389807647-kube-api-access-m55sx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.806514 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.806626 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.825980 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m55sx\" (UniqueName: \"kubernetes.io/projected/749e159a-10a8-4704-a263-3ec389807647-kube-api-access-m55sx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:41 crc kubenswrapper[4758]: I0130 08:43:41.987404 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:42 crc kubenswrapper[4758]: I0130 08:43:42.230847 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r"] Jan 30 08:43:42 crc kubenswrapper[4758]: I0130 08:43:42.511206 4758 generic.go:334] "Generic (PLEG): container finished" podID="749e159a-10a8-4704-a263-3ec389807647" containerID="a1330550da406b3a314a2c4212ad1f066a03110886e5d8024b4f4a295f618305" exitCode=0 Jan 30 08:43:42 crc kubenswrapper[4758]: I0130 08:43:42.511253 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" event={"ID":"749e159a-10a8-4704-a263-3ec389807647","Type":"ContainerDied","Data":"a1330550da406b3a314a2c4212ad1f066a03110886e5d8024b4f4a295f618305"} Jan 30 08:43:42 crc kubenswrapper[4758]: I0130 08:43:42.511320 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" event={"ID":"749e159a-10a8-4704-a263-3ec389807647","Type":"ContainerStarted","Data":"1709d0d8c255b42989366e9b1fc41959e4146ebf0442e6e462d4a5a0dd35530b"} Jan 30 08:43:44 crc kubenswrapper[4758]: I0130 08:43:44.487506 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-rxgh6" podUID="df190322-1e43-4ae4-ac74-78702c913801" containerName="console" containerID="cri-o://360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3" gracePeriod=15 Jan 30 08:43:44 crc kubenswrapper[4758]: I0130 08:43:44.521560 4758 generic.go:334] "Generic (PLEG): container finished" podID="749e159a-10a8-4704-a263-3ec389807647" containerID="1b82ce90d5ccfceb91d2dcb88837d441c4158ede174198a27ed1d455438532ca" exitCode=0 Jan 30 08:43:44 crc kubenswrapper[4758]: I0130 08:43:44.521830 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" 
event={"ID":"749e159a-10a8-4704-a263-3ec389807647","Type":"ContainerDied","Data":"1b82ce90d5ccfceb91d2dcb88837d441c4158ede174198a27ed1d455438532ca"} Jan 30 08:43:44 crc kubenswrapper[4758]: I0130 08:43:44.896330 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rxgh6_df190322-1e43-4ae4-ac74-78702c913801/console/0.log" Jan 30 08:43:44 crc kubenswrapper[4758]: I0130 08:43:44.896394 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.060545 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-console-config\") pod \"df190322-1e43-4ae4-ac74-78702c913801\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.060816 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-trusted-ca-bundle\") pod \"df190322-1e43-4ae4-ac74-78702c913801\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.060936 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-oauth-serving-cert\") pod \"df190322-1e43-4ae4-ac74-78702c913801\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.061062 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-serving-cert\") pod \"df190322-1e43-4ae4-ac74-78702c913801\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.061188 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-service-ca\") pod \"df190322-1e43-4ae4-ac74-78702c913801\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.061281 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tl5t\" (UniqueName: \"kubernetes.io/projected/df190322-1e43-4ae4-ac74-78702c913801-kube-api-access-8tl5t\") pod \"df190322-1e43-4ae4-ac74-78702c913801\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.061369 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-oauth-config\") pod \"df190322-1e43-4ae4-ac74-78702c913801\" (UID: \"df190322-1e43-4ae4-ac74-78702c913801\") " Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.061544 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "df190322-1e43-4ae4-ac74-78702c913801" (UID: "df190322-1e43-4ae4-ac74-78702c913801"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.061661 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "df190322-1e43-4ae4-ac74-78702c913801" (UID: "df190322-1e43-4ae4-ac74-78702c913801"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.061554 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-console-config" (OuterVolumeSpecName: "console-config") pod "df190322-1e43-4ae4-ac74-78702c913801" (UID: "df190322-1e43-4ae4-ac74-78702c913801"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.061847 4758 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.061939 4758 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.062010 4758 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.062031 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-service-ca" (OuterVolumeSpecName: "service-ca") pod "df190322-1e43-4ae4-ac74-78702c913801" (UID: "df190322-1e43-4ae4-ac74-78702c913801"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.066838 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df190322-1e43-4ae4-ac74-78702c913801-kube-api-access-8tl5t" (OuterVolumeSpecName: "kube-api-access-8tl5t") pod "df190322-1e43-4ae4-ac74-78702c913801" (UID: "df190322-1e43-4ae4-ac74-78702c913801"). InnerVolumeSpecName "kube-api-access-8tl5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.067320 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "df190322-1e43-4ae4-ac74-78702c913801" (UID: "df190322-1e43-4ae4-ac74-78702c913801"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.068971 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "df190322-1e43-4ae4-ac74-78702c913801" (UID: "df190322-1e43-4ae4-ac74-78702c913801"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.162626 4758 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df190322-1e43-4ae4-ac74-78702c913801-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.162655 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tl5t\" (UniqueName: \"kubernetes.io/projected/df190322-1e43-4ae4-ac74-78702c913801-kube-api-access-8tl5t\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.162667 4758 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.162675 4758 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/df190322-1e43-4ae4-ac74-78702c913801-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.528907 4758 generic.go:334] "Generic (PLEG): container finished" podID="749e159a-10a8-4704-a263-3ec389807647" containerID="539d0a2948c93c60e222e24187e4ed5c60f13aa4a1fa6df7c697a4475bfc8e01" exitCode=0 Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.528991 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" event={"ID":"749e159a-10a8-4704-a263-3ec389807647","Type":"ContainerDied","Data":"539d0a2948c93c60e222e24187e4ed5c60f13aa4a1fa6df7c697a4475bfc8e01"} Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.532201 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rxgh6_df190322-1e43-4ae4-ac74-78702c913801/console/0.log" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.532244 4758 generic.go:334] "Generic (PLEG): container finished" podID="df190322-1e43-4ae4-ac74-78702c913801" containerID="360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3" exitCode=2 Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.532272 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rxgh6" event={"ID":"df190322-1e43-4ae4-ac74-78702c913801","Type":"ContainerDied","Data":"360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3"} Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.532298 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rxgh6" event={"ID":"df190322-1e43-4ae4-ac74-78702c913801","Type":"ContainerDied","Data":"773264022dc035335d95553995a9e9c2d29eb6d0d5d52eedc14a38ba389878c4"} Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.532318 4758 scope.go:117] "RemoveContainer" containerID="360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.532427 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rxgh6" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.555308 4758 scope.go:117] "RemoveContainer" containerID="360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3" Jan 30 08:43:45 crc kubenswrapper[4758]: E0130 08:43:45.555603 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3\": container with ID starting with 360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3 not found: ID does not exist" containerID="360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.555626 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3"} err="failed to get container status \"360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3\": rpc error: code = NotFound desc = could not find container \"360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3\": container with ID starting with 360ce69b72ecc5363e82ec27c4aa68c8d3effc70fc6c6b830198ad6c0fa70ab3 not found: ID does not exist" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.564683 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rxgh6"] Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.568069 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-rxgh6"] Jan 30 08:43:45 crc kubenswrapper[4758]: E0130 08:43:45.599572 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf190322_1e43_4ae4_ac74_78702c913801.slice/crio-773264022dc035335d95553995a9e9c2d29eb6d0d5d52eedc14a38ba389878c4\": RecentStats: unable to find data in memory cache]" Jan 30 08:43:45 crc kubenswrapper[4758]: I0130 08:43:45.776255 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df190322-1e43-4ae4-ac74-78702c913801" path="/var/lib/kubelet/pods/df190322-1e43-4ae4-ac74-78702c913801/volumes" Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.751705 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.885312 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-bundle\") pod \"749e159a-10a8-4704-a263-3ec389807647\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.885398 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-util\") pod \"749e159a-10a8-4704-a263-3ec389807647\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.885517 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m55sx\" (UniqueName: \"kubernetes.io/projected/749e159a-10a8-4704-a263-3ec389807647-kube-api-access-m55sx\") pod \"749e159a-10a8-4704-a263-3ec389807647\" (UID: \"749e159a-10a8-4704-a263-3ec389807647\") " Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.886868 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-bundle" (OuterVolumeSpecName: "bundle") pod "749e159a-10a8-4704-a263-3ec389807647" (UID: "749e159a-10a8-4704-a263-3ec389807647"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.892097 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/749e159a-10a8-4704-a263-3ec389807647-kube-api-access-m55sx" (OuterVolumeSpecName: "kube-api-access-m55sx") pod "749e159a-10a8-4704-a263-3ec389807647" (UID: "749e159a-10a8-4704-a263-3ec389807647"). InnerVolumeSpecName "kube-api-access-m55sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.898687 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-util" (OuterVolumeSpecName: "util") pod "749e159a-10a8-4704-a263-3ec389807647" (UID: "749e159a-10a8-4704-a263-3ec389807647"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.987000 4758 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.987033 4758 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/749e159a-10a8-4704-a263-3ec389807647-util\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:46 crc kubenswrapper[4758]: I0130 08:43:46.987059 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m55sx\" (UniqueName: \"kubernetes.io/projected/749e159a-10a8-4704-a263-3ec389807647-kube-api-access-m55sx\") on node \"crc\" DevicePath \"\"" Jan 30 08:43:47 crc kubenswrapper[4758]: I0130 08:43:47.544973 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" event={"ID":"749e159a-10a8-4704-a263-3ec389807647","Type":"ContainerDied","Data":"1709d0d8c255b42989366e9b1fc41959e4146ebf0442e6e462d4a5a0dd35530b"} Jan 30 08:43:47 crc kubenswrapper[4758]: I0130 08:43:47.545025 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1709d0d8c255b42989366e9b1fc41959e4146ebf0442e6e462d4a5a0dd35530b" Jan 30 08:43:47 crc kubenswrapper[4758]: I0130 08:43:47.545071 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.088324 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4"] Jan 30 08:43:55 crc kubenswrapper[4758]: E0130 08:43:55.089133 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="749e159a-10a8-4704-a263-3ec389807647" containerName="extract" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.089150 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="749e159a-10a8-4704-a263-3ec389807647" containerName="extract" Jan 30 08:43:55 crc kubenswrapper[4758]: E0130 08:43:55.089160 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df190322-1e43-4ae4-ac74-78702c913801" containerName="console" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.089167 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="df190322-1e43-4ae4-ac74-78702c913801" containerName="console" Jan 30 08:43:55 crc kubenswrapper[4758]: E0130 08:43:55.089180 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="749e159a-10a8-4704-a263-3ec389807647" containerName="util" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.089188 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="749e159a-10a8-4704-a263-3ec389807647" containerName="util" Jan 30 08:43:55 crc kubenswrapper[4758]: E0130 08:43:55.089202 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="749e159a-10a8-4704-a263-3ec389807647" containerName="pull" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.089209 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="749e159a-10a8-4704-a263-3ec389807647" containerName="pull" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.089337 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="749e159a-10a8-4704-a263-3ec389807647" containerName="extract" Jan 
30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.089351 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="df190322-1e43-4ae4-ac74-78702c913801" containerName="console" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.089917 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.100779 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.101906 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.104606 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.104865 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.104937 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-qldd4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.140482 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4"] Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.191232 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29744c9b-d424-4ac9-b224-fe0956166373-webhook-cert\") pod \"metallb-operator-controller-manager-75c8688755-rr2t4\" (UID: \"29744c9b-d424-4ac9-b224-fe0956166373\") " pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.191292 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/29744c9b-d424-4ac9-b224-fe0956166373-apiservice-cert\") pod \"metallb-operator-controller-manager-75c8688755-rr2t4\" (UID: \"29744c9b-d424-4ac9-b224-fe0956166373\") " pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.191379 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q4mv\" (UniqueName: \"kubernetes.io/projected/29744c9b-d424-4ac9-b224-fe0956166373-kube-api-access-7q4mv\") pod \"metallb-operator-controller-manager-75c8688755-rr2t4\" (UID: \"29744c9b-d424-4ac9-b224-fe0956166373\") " pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.293141 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q4mv\" (UniqueName: \"kubernetes.io/projected/29744c9b-d424-4ac9-b224-fe0956166373-kube-api-access-7q4mv\") pod \"metallb-operator-controller-manager-75c8688755-rr2t4\" (UID: \"29744c9b-d424-4ac9-b224-fe0956166373\") " pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.293237 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/29744c9b-d424-4ac9-b224-fe0956166373-webhook-cert\") pod \"metallb-operator-controller-manager-75c8688755-rr2t4\" (UID: \"29744c9b-d424-4ac9-b224-fe0956166373\") " pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.293266 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/29744c9b-d424-4ac9-b224-fe0956166373-apiservice-cert\") pod \"metallb-operator-controller-manager-75c8688755-rr2t4\" (UID: \"29744c9b-d424-4ac9-b224-fe0956166373\") " pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.300720 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/29744c9b-d424-4ac9-b224-fe0956166373-apiservice-cert\") pod \"metallb-operator-controller-manager-75c8688755-rr2t4\" (UID: \"29744c9b-d424-4ac9-b224-fe0956166373\") " pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.312299 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29744c9b-d424-4ac9-b224-fe0956166373-webhook-cert\") pod \"metallb-operator-controller-manager-75c8688755-rr2t4\" (UID: \"29744c9b-d424-4ac9-b224-fe0956166373\") " pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.338904 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q4mv\" (UniqueName: \"kubernetes.io/projected/29744c9b-d424-4ac9-b224-fe0956166373-kube-api-access-7q4mv\") pod \"metallb-operator-controller-manager-75c8688755-rr2t4\" (UID: \"29744c9b-d424-4ac9-b224-fe0956166373\") " pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.404831 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.571531 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr"] Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.572492 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.574245 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.574435 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-7cwlc" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.574887 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.626800 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr"] Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.698891 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcvdn\" (UniqueName: \"kubernetes.io/projected/79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5-kube-api-access-gcvdn\") pod \"metallb-operator-webhook-server-7ffc5c558b-h88wr\" (UID: \"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5\") " pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.698966 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5-apiservice-cert\") pod \"metallb-operator-webhook-server-7ffc5c558b-h88wr\" (UID: \"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5\") " pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.698991 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5-webhook-cert\") pod \"metallb-operator-webhook-server-7ffc5c558b-h88wr\" (UID: \"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5\") " pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.800421 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcvdn\" (UniqueName: \"kubernetes.io/projected/79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5-kube-api-access-gcvdn\") pod \"metallb-operator-webhook-server-7ffc5c558b-h88wr\" (UID: \"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5\") " pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.800504 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5-apiservice-cert\") pod \"metallb-operator-webhook-server-7ffc5c558b-h88wr\" (UID: \"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5\") " pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.800529 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5-webhook-cert\") pod \"metallb-operator-webhook-server-7ffc5c558b-h88wr\" (UID: \"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5\") " pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 
08:43:55.845641 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.859413 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5-webhook-cert\") pod \"metallb-operator-webhook-server-7ffc5c558b-h88wr\" (UID: \"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5\") " pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.859901 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5-apiservice-cert\") pod \"metallb-operator-webhook-server-7ffc5c558b-h88wr\" (UID: \"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5\") " pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.863442 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcvdn\" (UniqueName: \"kubernetes.io/projected/79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5-kube-api-access-gcvdn\") pod \"metallb-operator-webhook-server-7ffc5c558b-h88wr\" (UID: \"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5\") " pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.893679 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-7cwlc" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.905140 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:43:55 crc kubenswrapper[4758]: I0130 08:43:55.975374 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4"] Jan 30 08:43:55 crc kubenswrapper[4758]: W0130 08:43:55.985762 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29744c9b_d424_4ac9_b224_fe0956166373.slice/crio-cdb53d7ef7f68f9e9c815230c8003f61d7a416280fbe4fb6c8fc83aa797aa889 WatchSource:0}: Error finding container cdb53d7ef7f68f9e9c815230c8003f61d7a416280fbe4fb6c8fc83aa797aa889: Status 404 returned error can't find the container with id cdb53d7ef7f68f9e9c815230c8003f61d7a416280fbe4fb6c8fc83aa797aa889 Jan 30 08:43:56 crc kubenswrapper[4758]: I0130 08:43:56.185843 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr"] Jan 30 08:43:56 crc kubenswrapper[4758]: I0130 08:43:56.588296 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" event={"ID":"29744c9b-d424-4ac9-b224-fe0956166373","Type":"ContainerStarted","Data":"cdb53d7ef7f68f9e9c815230c8003f61d7a416280fbe4fb6c8fc83aa797aa889"} Jan 30 08:43:56 crc kubenswrapper[4758]: I0130 08:43:56.589737 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" event={"ID":"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5","Type":"ContainerStarted","Data":"476884484fef409049f88b6d1273b20e66d6f353661cead64c7718bfa8998b7d"} Jan 30 08:43:59 crc kubenswrapper[4758]: I0130 08:43:59.608520 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" event={"ID":"29744c9b-d424-4ac9-b224-fe0956166373","Type":"ContainerStarted","Data":"624c9783e5df601e4eb9fa9d9ff41b2749f66eefb95954fa6337ab34d8746560"} Jan 30 08:43:59 crc kubenswrapper[4758]: I0130 08:43:59.608894 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:43:59 crc kubenswrapper[4758]: I0130 08:43:59.643524 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" podStartSLOduration=1.26199278 podStartE2EDuration="4.643506098s" podCreationTimestamp="2026-01-30 08:43:55 +0000 UTC" firstStartedPulling="2026-01-30 08:43:55.98890687 +0000 UTC m=+840.961218421" lastFinishedPulling="2026-01-30 08:43:59.370420178 +0000 UTC m=+844.342731739" observedRunningTime="2026-01-30 08:43:59.642330091 +0000 UTC m=+844.614641642" watchObservedRunningTime="2026-01-30 08:43:59.643506098 +0000 UTC m=+844.615817659" Jan 30 08:44:02 crc kubenswrapper[4758]: I0130 08:44:02.633590 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" event={"ID":"79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5","Type":"ContainerStarted","Data":"d170ad99e54219da543d100f434197ce5a9698b95de4fd59c050f5d7c53d9f88"} Jan 30 08:44:02 crc kubenswrapper[4758]: I0130 08:44:02.634235 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:44:02 crc kubenswrapper[4758]: I0130 08:44:02.706826 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" podStartSLOduration=1.821817979 podStartE2EDuration="7.706805339s" podCreationTimestamp="2026-01-30 08:43:55 +0000 UTC" firstStartedPulling="2026-01-30 08:43:56.199798597 +0000 UTC m=+841.172110148" lastFinishedPulling="2026-01-30 08:44:02.084785957 +0000 UTC m=+847.057097508" observedRunningTime="2026-01-30 08:44:02.702829904 +0000 UTC m=+847.675141485" watchObservedRunningTime="2026-01-30 08:44:02.706805339 +0000 UTC m=+847.679116910" Jan 30 08:44:15 crc kubenswrapper[4758]: I0130 08:44:15.906760 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7ffc5c558b-h88wr" Jan 30 08:44:35 crc kubenswrapper[4758]: I0130 08:44:35.407404 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-75c8688755-rr2t4" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.215618 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-vfjq6"] Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.218552 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.221829 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-dblws" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.222096 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.222216 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.230097 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn"] Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.236508 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn"] Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.236572 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.246262 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.323235 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-frr-sockets\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.323275 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-frr-conf\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.323322 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z52pl\" (UniqueName: \"kubernetes.io/projected/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-kube-api-access-z52pl\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.323348 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-reloader\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.323368 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-frr-startup\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.323395 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-metrics-certs\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " 
pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.323437 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-metrics\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.332825 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-67xgc"] Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.333628 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.358355 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.358586 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-v5sql" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.358702 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.358837 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.415631 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-28gk8"] Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.416801 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.424626 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-metrics-certs\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: E0130 08:44:36.424764 4758 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 30 08:44:36 crc kubenswrapper[4758]: E0130 08:44:36.424823 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-metrics-certs podName:c8f7b5d4-29b4-4741-9bcf-a993dbbce575 nodeName:}" failed. No retries permitted until 2026-01-30 08:44:36.924803094 +0000 UTC m=+881.897114645 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-metrics-certs") pod "frr-k8s-vfjq6" (UID: "c8f7b5d4-29b4-4741-9bcf-a993dbbce575") : secret "frr-k8s-certs-secret" not found Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425077 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5gfl\" (UniqueName: \"kubernetes.io/projected/0393e366-eeba-40c9-8020-9b16d0092dfd-kube-api-access-s5gfl\") pod \"frr-k8s-webhook-server-7df86c4f6c-kjhjn\" (UID: \"0393e366-eeba-40c9-8020-9b16d0092dfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425125 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-metrics-certs\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425148 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-metrics\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425168 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/10c38902-7117-4dc3-ad90-eb26dd9656de-metallb-excludel2\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425191 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46jx2\" (UniqueName: \"kubernetes.io/projected/10c38902-7117-4dc3-ad90-eb26dd9656de-kube-api-access-46jx2\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425207 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-frr-sockets\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425224 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-frr-conf\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425241 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-memberlist\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425271 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z52pl\" (UniqueName: 
\"kubernetes.io/projected/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-kube-api-access-z52pl\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425288 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-reloader\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425304 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0393e366-eeba-40c9-8020-9b16d0092dfd-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kjhjn\" (UID: \"0393e366-eeba-40c9-8020-9b16d0092dfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.425322 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-frr-startup\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.426274 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-metrics\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.426365 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-frr-conf\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.426457 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-reloader\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.426625 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-frr-startup\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.426651 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-frr-sockets\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.432711 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.470818 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-28gk8"] Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.481816 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z52pl\" 
(UniqueName: \"kubernetes.io/projected/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-kube-api-access-z52pl\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.526329 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5gfl\" (UniqueName: \"kubernetes.io/projected/0393e366-eeba-40c9-8020-9b16d0092dfd-kube-api-access-s5gfl\") pod \"frr-k8s-webhook-server-7df86c4f6c-kjhjn\" (UID: \"0393e366-eeba-40c9-8020-9b16d0092dfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.526379 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30993dee-7712-48e7-a156-86293a84ea40-metrics-certs\") pod \"controller-6968d8fdc4-28gk8\" (UID: \"30993dee-7712-48e7-a156-86293a84ea40\") " pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.526428 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-metrics-certs\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.526452 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/10c38902-7117-4dc3-ad90-eb26dd9656de-metallb-excludel2\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.526495 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46jx2\" (UniqueName: \"kubernetes.io/projected/10c38902-7117-4dc3-ad90-eb26dd9656de-kube-api-access-46jx2\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.526514 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfxkz\" (UniqueName: \"kubernetes.io/projected/30993dee-7712-48e7-a156-86293a84ea40-kube-api-access-vfxkz\") pod \"controller-6968d8fdc4-28gk8\" (UID: \"30993dee-7712-48e7-a156-86293a84ea40\") " pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.526534 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-memberlist\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.526583 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30993dee-7712-48e7-a156-86293a84ea40-cert\") pod \"controller-6968d8fdc4-28gk8\" (UID: \"30993dee-7712-48e7-a156-86293a84ea40\") " pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.526603 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0393e366-eeba-40c9-8020-9b16d0092dfd-cert\") pod 
\"frr-k8s-webhook-server-7df86c4f6c-kjhjn\" (UID: \"0393e366-eeba-40c9-8020-9b16d0092dfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.529734 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/10c38902-7117-4dc3-ad90-eb26dd9656de-metallb-excludel2\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.531058 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0393e366-eeba-40c9-8020-9b16d0092dfd-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kjhjn\" (UID: \"0393e366-eeba-40c9-8020-9b16d0092dfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:44:36 crc kubenswrapper[4758]: E0130 08:44:36.531458 4758 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 08:44:36 crc kubenswrapper[4758]: E0130 08:44:36.531499 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-memberlist podName:10c38902-7117-4dc3-ad90-eb26dd9656de nodeName:}" failed. No retries permitted until 2026-01-30 08:44:37.031487136 +0000 UTC m=+882.003798677 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-memberlist") pod "speaker-67xgc" (UID: "10c38902-7117-4dc3-ad90-eb26dd9656de") : secret "metallb-memberlist" not found Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.555491 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-metrics-certs\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.566658 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5gfl\" (UniqueName: \"kubernetes.io/projected/0393e366-eeba-40c9-8020-9b16d0092dfd-kube-api-access-s5gfl\") pod \"frr-k8s-webhook-server-7df86c4f6c-kjhjn\" (UID: \"0393e366-eeba-40c9-8020-9b16d0092dfd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.570329 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.581088 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46jx2\" (UniqueName: \"kubernetes.io/projected/10c38902-7117-4dc3-ad90-eb26dd9656de-kube-api-access-46jx2\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.628006 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfxkz\" (UniqueName: \"kubernetes.io/projected/30993dee-7712-48e7-a156-86293a84ea40-kube-api-access-vfxkz\") pod \"controller-6968d8fdc4-28gk8\" (UID: \"30993dee-7712-48e7-a156-86293a84ea40\") " pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.628095 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30993dee-7712-48e7-a156-86293a84ea40-cert\") pod \"controller-6968d8fdc4-28gk8\" (UID: \"30993dee-7712-48e7-a156-86293a84ea40\") " pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.628144 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30993dee-7712-48e7-a156-86293a84ea40-metrics-certs\") pod \"controller-6968d8fdc4-28gk8\" (UID: \"30993dee-7712-48e7-a156-86293a84ea40\") " pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.631870 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30993dee-7712-48e7-a156-86293a84ea40-metrics-certs\") pod \"controller-6968d8fdc4-28gk8\" (UID: \"30993dee-7712-48e7-a156-86293a84ea40\") " pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.636315 4758 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.644827 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30993dee-7712-48e7-a156-86293a84ea40-cert\") pod \"controller-6968d8fdc4-28gk8\" (UID: \"30993dee-7712-48e7-a156-86293a84ea40\") " pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.651754 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfxkz\" (UniqueName: \"kubernetes.io/projected/30993dee-7712-48e7-a156-86293a84ea40-kube-api-access-vfxkz\") pod \"controller-6968d8fdc4-28gk8\" (UID: \"30993dee-7712-48e7-a156-86293a84ea40\") " pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.734270 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-28gk8"
Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.931823 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-metrics-certs\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6"
Jan 30 08:44:36 crc kubenswrapper[4758]: I0130 08:44:36.940017 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8f7b5d4-29b4-4741-9bcf-a993dbbce575-metrics-certs\") pod \"frr-k8s-vfjq6\" (UID: \"c8f7b5d4-29b4-4741-9bcf-a993dbbce575\") " pod="metallb-system/frr-k8s-vfjq6"
Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.033992 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-memberlist\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc"
Jan 30 08:44:37 crc kubenswrapper[4758]: E0130 08:44:37.034264 4758 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 30 08:44:37 crc kubenswrapper[4758]: E0130 08:44:37.034382 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-memberlist podName:10c38902-7117-4dc3-ad90-eb26dd9656de nodeName:}" failed. No retries permitted until 2026-01-30 08:44:38.034329963 +0000 UTC m=+883.006641514 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-memberlist") pod "speaker-67xgc" (UID: "10c38902-7117-4dc3-ad90-eb26dd9656de") : secret "metallb-memberlist" not found
Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.058059 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn"]
Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.105153 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-28gk8"]
Jan 30 08:44:37 crc kubenswrapper[4758]: W0130 08:44:37.110193 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30993dee_7712_48e7_a156_86293a84ea40.slice/crio-83b2ba871823a0c324846e00ba32cb6730aae27210c4eb0683c37e2121e68bb5 WatchSource:0}: Error finding container 83b2ba871823a0c324846e00ba32cb6730aae27210c4eb0683c37e2121e68bb5: Status 404 returned error can't find the container with id 83b2ba871823a0c324846e00ba32cb6730aae27210c4eb0683c37e2121e68bb5
Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.154491 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-vfjq6"
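The two "metallb-memberlist" failures above show the kubelet's per-volume retry backoff: the first MountVolume.SetUp failure (08:44:36.531, durationBeforeRetry 500ms) is retried after half a second, the second (08:44:37.034, durationBeforeRetry 1s) after a full second, and the delay keeps doubling until the secret appears; the mount succeeds at 08:44:38.054 below once the secret has been created. A minimal Go sketch of that doubling policy, assuming an illustrative cap (the real logic lives in the kubelet's nestedpendingoperations/exponential-backoff code; retryBackoff and its fields are hypothetical names):

    package main

    import (
        "fmt"
        "time"
    )

    // retryBackoff mirrors the doubling delays visible above: 500ms, 1s, 2s, ...
    // The limit is illustrative; the kubelet caps volume retries at roughly
    // two minutes.
    type retryBackoff struct {
        initial, limit, current time.Duration
    }

    func (b *retryBackoff) next() time.Duration {
        switch {
        case b.current == 0:
            b.current = b.initial
        case b.current*2 <= b.limit:
            b.current *= 2
        default:
            b.current = b.limit
        }
        return b.current
    }

    func main() {
        b := &retryBackoff{initial: 500 * time.Millisecond, limit: 2 * time.Minute}
        for i := 1; i <= 4; i++ {
            fmt.Printf("retry %d in %v\n", i, b.next()) // 500ms, 1s, 2s, 4s
        }
    }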
Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.807084 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" event={"ID":"0393e366-eeba-40c9-8020-9b16d0092dfd","Type":"ContainerStarted","Data":"da3ca2cdc27d5ffd520fd635e97f4aaa57b72211a323ee724ba6b0d2c5e4af80"}
Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.808533 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-28gk8" event={"ID":"30993dee-7712-48e7-a156-86293a84ea40","Type":"ContainerStarted","Data":"ebe9472869a92cb053722bf97bf776314958ef06fd1cf63bdb8f3608045a8b97"}
Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.808564 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-28gk8" event={"ID":"30993dee-7712-48e7-a156-86293a84ea40","Type":"ContainerStarted","Data":"3e54a6c107cf56e6bc4150199ff9bc7049cd6443e346b0ba698ec17dedc5d67d"}
Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.808582 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-28gk8" event={"ID":"30993dee-7712-48e7-a156-86293a84ea40","Type":"ContainerStarted","Data":"83b2ba871823a0c324846e00ba32cb6730aae27210c4eb0683c37e2121e68bb5"}
Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.809561 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-28gk8"
Jan 30 08:44:37 crc kubenswrapper[4758]: I0130 08:44:37.810495 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerStarted","Data":"717c004b2b3954b8edb26aabb4d187dc7b2a366b896978bcd1ef98115c02652d"}
Jan 30 08:44:38 crc kubenswrapper[4758]: I0130 08:44:38.049035 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-memberlist\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc"
Jan 30 08:44:38 crc kubenswrapper[4758]: I0130 08:44:38.054539 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/10c38902-7117-4dc3-ad90-eb26dd9656de-memberlist\") pod \"speaker-67xgc\" (UID: \"10c38902-7117-4dc3-ad90-eb26dd9656de\") " pod="metallb-system/speaker-67xgc"
Jan 30 08:44:38 crc kubenswrapper[4758]: I0130 08:44:38.153548 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-67xgc" Jan 30 08:44:38 crc kubenswrapper[4758]: I0130 08:44:38.838227 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-67xgc" event={"ID":"10c38902-7117-4dc3-ad90-eb26dd9656de","Type":"ContainerStarted","Data":"3d19062d3ced847174b716ec36288a7d03906fca45d21cd130c81c8ebf4692f4"} Jan 30 08:44:38 crc kubenswrapper[4758]: I0130 08:44:38.838268 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-67xgc" event={"ID":"10c38902-7117-4dc3-ad90-eb26dd9656de","Type":"ContainerStarted","Data":"be40c7a84bc977470834422b11a82801eb3ebebb4c5e007b6a973c2dc940ad28"} Jan 30 08:44:39 crc kubenswrapper[4758]: I0130 08:44:39.849569 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-67xgc" event={"ID":"10c38902-7117-4dc3-ad90-eb26dd9656de","Type":"ContainerStarted","Data":"01d615c550515ba0a1a32634ef47dce8e001df825f334aeebca691722684a74a"} Jan 30 08:44:39 crc kubenswrapper[4758]: I0130 08:44:39.866217 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-28gk8" podStartSLOduration=3.866197403 podStartE2EDuration="3.866197403s" podCreationTimestamp="2026-01-30 08:44:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:44:37.830167987 +0000 UTC m=+882.802479558" watchObservedRunningTime="2026-01-30 08:44:39.866197403 +0000 UTC m=+884.838508954" Jan 30 08:44:39 crc kubenswrapper[4758]: I0130 08:44:39.868208 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-67xgc" podStartSLOduration=3.868201166 podStartE2EDuration="3.868201166s" podCreationTimestamp="2026-01-30 08:44:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:44:39.866540583 +0000 UTC m=+884.838852124" watchObservedRunningTime="2026-01-30 08:44:39.868201166 +0000 UTC m=+884.840512717" Jan 30 08:44:41 crc kubenswrapper[4758]: I0130 08:44:41.002017 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-67xgc" Jan 30 08:44:48 crc kubenswrapper[4758]: I0130 08:44:48.157601 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-67xgc" Jan 30 08:44:50 crc kubenswrapper[4758]: I0130 08:44:50.214082 4758 generic.go:334] "Generic (PLEG): container finished" podID="c8f7b5d4-29b4-4741-9bcf-a993dbbce575" containerID="773297938fde593aa5746bf664e90261f77554f915b037fa617cd59e99188af1" exitCode=0 Jan 30 08:44:50 crc kubenswrapper[4758]: I0130 08:44:50.214371 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerDied","Data":"773297938fde593aa5746bf664e90261f77554f915b037fa617cd59e99188af1"} Jan 30 08:44:50 crc kubenswrapper[4758]: I0130 08:44:50.219724 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" event={"ID":"0393e366-eeba-40c9-8020-9b16d0092dfd","Type":"ContainerStarted","Data":"544c34bafdf4ddd7f6c1105ce53a5a195635e27980839bf34c0b7eacc43dcc86"} Jan 30 08:44:50 crc kubenswrapper[4758]: I0130 08:44:50.220403 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:44:50 crc 
kubenswrapper[4758]: I0130 08:44:50.299292 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" podStartSLOduration=1.442547219 podStartE2EDuration="14.299266621s" podCreationTimestamp="2026-01-30 08:44:36 +0000 UTC" firstStartedPulling="2026-01-30 08:44:37.064719148 +0000 UTC m=+882.037030699" lastFinishedPulling="2026-01-30 08:44:49.92143855 +0000 UTC m=+894.893750101" observedRunningTime="2026-01-30 08:44:50.274549364 +0000 UTC m=+895.246860915" watchObservedRunningTime="2026-01-30 08:44:50.299266621 +0000 UTC m=+895.271578192"
Jan 30 08:44:51 crc kubenswrapper[4758]: I0130 08:44:51.227787 4758 generic.go:334] "Generic (PLEG): container finished" podID="c8f7b5d4-29b4-4741-9bcf-a993dbbce575" containerID="45f332d8f7698818dc6e493762afd7053beb5f35fc8b21b73d8bd865f62a8dc0" exitCode=0
Jan 30 08:44:51 crc kubenswrapper[4758]: I0130 08:44:51.229380 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerDied","Data":"45f332d8f7698818dc6e493762afd7053beb5f35fc8b21b73d8bd865f62a8dc0"}
Jan 30 08:44:51 crc kubenswrapper[4758]: I0130 08:44:51.827434 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-mdv5w"]
Jan 30 08:44:51 crc kubenswrapper[4758]: I0130 08:44:51.828993 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mdv5w"
Jan 30 08:44:51 crc kubenswrapper[4758]: I0130 08:44:51.831226 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 30 08:44:51 crc kubenswrapper[4758]: I0130 08:44:51.831618 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-rxr69"
Jan 30 08:44:51 crc kubenswrapper[4758]: I0130 08:44:51.831623 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 30 08:44:51 crc kubenswrapper[4758]: I0130 08:44:51.851145 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mdv5w"]
Jan 30 08:44:51 crc kubenswrapper[4758]: I0130 08:44:51.936734 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tqmx\" (UniqueName: \"kubernetes.io/projected/94969b49-36b1-4fcf-9fb3-54b56db5b8f6-kube-api-access-7tqmx\") pod \"openstack-operator-index-mdv5w\" (UID: \"94969b49-36b1-4fcf-9fb3-54b56db5b8f6\") " pod="openstack-operators/openstack-operator-index-mdv5w"
Jan 30 08:44:52 crc kubenswrapper[4758]: I0130 08:44:52.038383 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tqmx\" (UniqueName: \"kubernetes.io/projected/94969b49-36b1-4fcf-9fb3-54b56db5b8f6-kube-api-access-7tqmx\") pod \"openstack-operator-index-mdv5w\" (UID: \"94969b49-36b1-4fcf-9fb3-54b56db5b8f6\") " pod="openstack-operators/openstack-operator-index-mdv5w"
Jan 30 08:44:52 crc kubenswrapper[4758]: I0130 08:44:52.064268 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tqmx\" (UniqueName: \"kubernetes.io/projected/94969b49-36b1-4fcf-9fb3-54b56db5b8f6-kube-api-access-7tqmx\") pod \"openstack-operator-index-mdv5w\" (UID: \"94969b49-36b1-4fcf-9fb3-54b56db5b8f6\") " pod="openstack-operators/openstack-operator-index-mdv5w"
Jan 30 08:44:52 crc kubenswrapper[4758]: I0130 08:44:52.152303 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mdv5w"
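The podStartSLOduration in the entry at 08:44:50.299292 above is the end-to-end startup time with the image-pull window subtracted, which is why it is so much smaller than podStartE2EDuration: 14.299266621s minus the pull window (08:44:49.92143855 - 08:44:37.064719148 = 12.856719402s) gives exactly the logged 1.442547219. A quick check of that arithmetic in Go, with the values hard-coded from the log line:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values copied from the pod_startup_latency_tracker entry above.
        e2e := 14299266621 * time.Nanosecond // podStartE2EDuration
        firstPull := time.Date(2026, time.January, 30, 8, 44, 37, 64719148, time.UTC)
        lastPull := time.Date(2026, time.January, 30, 8, 44, 49, 921438550, time.UTC)
        slo := e2e - lastPull.Sub(firstPull)
        fmt.Println(slo.Seconds()) // 1.442547219, the logged podStartSLOduration
    }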
Jan 30 08:44:52 crc kubenswrapper[4758]: I0130 08:44:52.240906 4758 generic.go:334] "Generic (PLEG): container finished" podID="c8f7b5d4-29b4-4741-9bcf-a993dbbce575" containerID="8cb1da3d8bf62ef21118c10b0bc9dc75a479757108dfeab5310ac8c2e5a8f3ae" exitCode=0
Jan 30 08:44:52 crc kubenswrapper[4758]: I0130 08:44:52.241206 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerDied","Data":"8cb1da3d8bf62ef21118c10b0bc9dc75a479757108dfeab5310ac8c2e5a8f3ae"}
Jan 30 08:44:52 crc kubenswrapper[4758]: I0130 08:44:52.387747 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:44:52 crc kubenswrapper[4758]: I0130 08:44:52.387816 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:44:52 crc kubenswrapper[4758]: I0130 08:44:52.423015 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mdv5w"]
Jan 30 08:44:52 crc kubenswrapper[4758]: W0130 08:44:52.430958 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94969b49_36b1_4fcf_9fb3_54b56db5b8f6.slice/crio-d531cb5ccc7016645c87fc3ee30629a2dbd9c42a9ab3709a5c8c3fdcb1e1c06f WatchSource:0}: Error finding container d531cb5ccc7016645c87fc3ee30629a2dbd9c42a9ab3709a5c8c3fdcb1e1c06f: Status 404 returned error can't find the container with id d531cb5ccc7016645c87fc3ee30629a2dbd9c42a9ab3709a5c8c3fdcb1e1c06f
Jan 30 08:44:53 crc kubenswrapper[4758]: I0130 08:44:53.248050 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerStarted","Data":"2c1c74700c9bcc888bcc960984c74204a45d8221b2b14efe16cbbf2d10f5efe5"}
Jan 30 08:44:53 crc kubenswrapper[4758]: I0130 08:44:53.249175 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerStarted","Data":"34b6d60c439f116164b388d5ceb4ad0586358ece65d107884c91456757381b30"}
Jan 30 08:44:53 crc kubenswrapper[4758]: I0130 08:44:53.249284 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mdv5w" event={"ID":"94969b49-36b1-4fcf-9fb3-54b56db5b8f6","Type":"ContainerStarted","Data":"d531cb5ccc7016645c87fc3ee30629a2dbd9c42a9ab3709a5c8c3fdcb1e1c06f"}
Jan 30 08:44:54 crc kubenswrapper[4758]: I0130 08:44:54.279156 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerStarted","Data":"c62a320e0370cdb3f732fb2eeb91ed4cd48443ed51f58c5a1953293fb3b638ea"}
Jan 30 08:44:55 crc kubenswrapper[4758]: I0130 08:44:55.001155 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/openstack-operator-index-mdv5w"] Jan 30 08:44:55 crc kubenswrapper[4758]: I0130 08:44:55.608179 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zh8ld"] Jan 30 08:44:55 crc kubenswrapper[4758]: I0130 08:44:55.609021 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zh8ld" Jan 30 08:44:55 crc kubenswrapper[4758]: I0130 08:44:55.619518 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zh8ld"] Jan 30 08:44:55 crc kubenswrapper[4758]: I0130 08:44:55.693822 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9qsn\" (UniqueName: \"kubernetes.io/projected/43724cfc-11f7-4ded-9561-4bde1020015f-kube-api-access-d9qsn\") pod \"openstack-operator-index-zh8ld\" (UID: \"43724cfc-11f7-4ded-9561-4bde1020015f\") " pod="openstack-operators/openstack-operator-index-zh8ld" Jan 30 08:44:55 crc kubenswrapper[4758]: I0130 08:44:55.794898 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9qsn\" (UniqueName: \"kubernetes.io/projected/43724cfc-11f7-4ded-9561-4bde1020015f-kube-api-access-d9qsn\") pod \"openstack-operator-index-zh8ld\" (UID: \"43724cfc-11f7-4ded-9561-4bde1020015f\") " pod="openstack-operators/openstack-operator-index-zh8ld" Jan 30 08:44:55 crc kubenswrapper[4758]: I0130 08:44:55.834953 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9qsn\" (UniqueName: \"kubernetes.io/projected/43724cfc-11f7-4ded-9561-4bde1020015f-kube-api-access-d9qsn\") pod \"openstack-operator-index-zh8ld\" (UID: \"43724cfc-11f7-4ded-9561-4bde1020015f\") " pod="openstack-operators/openstack-operator-index-zh8ld" Jan 30 08:44:55 crc kubenswrapper[4758]: I0130 08:44:55.932504 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-zh8ld" Jan 30 08:44:56 crc kubenswrapper[4758]: I0130 08:44:56.298348 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerStarted","Data":"7af26f108b37c5184a8ef155aa0a255cc45c06d954560d6de60e851cd91e8ea0"} Jan 30 08:44:56 crc kubenswrapper[4758]: I0130 08:44:56.298414 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerStarted","Data":"37f324624ea32820799303f521f70cfb5c3fe381b0fc5dc439a42fa4b7a67985"} Jan 30 08:44:56 crc kubenswrapper[4758]: I0130 08:44:56.299431 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mdv5w" event={"ID":"94969b49-36b1-4fcf-9fb3-54b56db5b8f6","Type":"ContainerStarted","Data":"863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622"} Jan 30 08:44:56 crc kubenswrapper[4758]: I0130 08:44:56.299614 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-mdv5w" podUID="94969b49-36b1-4fcf-9fb3-54b56db5b8f6" containerName="registry-server" containerID="cri-o://863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622" gracePeriod=2 Jan 30 08:44:56 crc kubenswrapper[4758]: I0130 08:44:56.323274 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-mdv5w" podStartSLOduration=2.040726619 podStartE2EDuration="5.323219416s" podCreationTimestamp="2026-01-30 08:44:51 +0000 UTC" firstStartedPulling="2026-01-30 08:44:52.434225365 +0000 UTC m=+897.406536916" lastFinishedPulling="2026-01-30 08:44:55.716718162 +0000 UTC m=+900.689029713" observedRunningTime="2026-01-30 08:44:56.321552144 +0000 UTC m=+901.293863705" watchObservedRunningTime="2026-01-30 08:44:56.323219416 +0000 UTC m=+901.295530967" Jan 30 08:44:56 crc kubenswrapper[4758]: I0130 08:44:56.634553 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zh8ld"] Jan 30 08:44:56 crc kubenswrapper[4758]: I0130 08:44:56.745506 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-28gk8" Jan 30 08:44:56 crc kubenswrapper[4758]: I0130 08:44:56.930349 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mdv5w" Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.129411 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tqmx\" (UniqueName: \"kubernetes.io/projected/94969b49-36b1-4fcf-9fb3-54b56db5b8f6-kube-api-access-7tqmx\") pod \"94969b49-36b1-4fcf-9fb3-54b56db5b8f6\" (UID: \"94969b49-36b1-4fcf-9fb3-54b56db5b8f6\") " Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.136518 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94969b49-36b1-4fcf-9fb3-54b56db5b8f6-kube-api-access-7tqmx" (OuterVolumeSpecName: "kube-api-access-7tqmx") pod "94969b49-36b1-4fcf-9fb3-54b56db5b8f6" (UID: "94969b49-36b1-4fcf-9fb3-54b56db5b8f6"). InnerVolumeSpecName "kube-api-access-7tqmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.230947 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tqmx\" (UniqueName: \"kubernetes.io/projected/94969b49-36b1-4fcf-9fb3-54b56db5b8f6-kube-api-access-7tqmx\") on node \"crc\" DevicePath \"\"" Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.306759 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zh8ld" event={"ID":"43724cfc-11f7-4ded-9561-4bde1020015f","Type":"ContainerStarted","Data":"8cd09ee0b6e657488e2aba6443c03994d9742b975270de0bf53057140007c0f5"} Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.306801 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zh8ld" event={"ID":"43724cfc-11f7-4ded-9561-4bde1020015f","Type":"ContainerStarted","Data":"3a8cfbe98dba995e80cdee4a056c92ea86d612522152e62c77b307caa6775c7a"} Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.312102 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vfjq6" event={"ID":"c8f7b5d4-29b4-4741-9bcf-a993dbbce575","Type":"ContainerStarted","Data":"6c9b07fded761bfe8c522ce99eb9868da841a9fe8888e977a9db8092456d2992"} Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.312595 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.314143 4758 generic.go:334] "Generic (PLEG): container finished" podID="94969b49-36b1-4fcf-9fb3-54b56db5b8f6" containerID="863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622" exitCode=0 Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.314173 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mdv5w" event={"ID":"94969b49-36b1-4fcf-9fb3-54b56db5b8f6","Type":"ContainerDied","Data":"863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622"} Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.314190 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mdv5w" event={"ID":"94969b49-36b1-4fcf-9fb3-54b56db5b8f6","Type":"ContainerDied","Data":"d531cb5ccc7016645c87fc3ee30629a2dbd9c42a9ab3709a5c8c3fdcb1e1c06f"} Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.314206 4758 scope.go:117] "RemoveContainer" containerID="863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622" Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.314208 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mdv5w"
Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.330849 4758 scope.go:117] "RemoveContainer" containerID="863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622"
Jan 30 08:44:57 crc kubenswrapper[4758]: E0130 08:44:57.331568 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622\": container with ID starting with 863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622 not found: ID does not exist" containerID="863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622"
Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.331679 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622"} err="failed to get container status \"863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622\": rpc error: code = NotFound desc = could not find container \"863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622\": container with ID starting with 863cd6f2260d648c56c20519a390066564baa3e72dc7c9c9112dd6de4c2ba622 not found: ID does not exist"
Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.333697 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zh8ld" podStartSLOduration=2.250028483 podStartE2EDuration="2.333678631s" podCreationTimestamp="2026-01-30 08:44:55 +0000 UTC" firstStartedPulling="2026-01-30 08:44:56.656193857 +0000 UTC m=+901.628505408" lastFinishedPulling="2026-01-30 08:44:56.739844005 +0000 UTC m=+901.712155556" observedRunningTime="2026-01-30 08:44:57.324620757 +0000 UTC m=+902.296932328" watchObservedRunningTime="2026-01-30 08:44:57.333678631 +0000 UTC m=+902.305990182"
Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.359890 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-vfjq6" podStartSLOduration=8.709285519 podStartE2EDuration="21.359866835s" podCreationTimestamp="2026-01-30 08:44:36 +0000 UTC" firstStartedPulling="2026-01-30 08:44:37.251165866 +0000 UTC m=+882.223477427" lastFinishedPulling="2026-01-30 08:44:49.901747192 +0000 UTC m=+894.874058743" observedRunningTime="2026-01-30 08:44:57.356553 +0000 UTC m=+902.328864571" watchObservedRunningTime="2026-01-30 08:44:57.359866835 +0000 UTC m=+902.332178386"
Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.371589 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mdv5w"]
Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.376499 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-mdv5w"]
Jan 30 08:44:57 crc kubenswrapper[4758]: I0130 08:44:57.776420 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94969b49-36b1-4fcf-9fb3-54b56db5b8f6" path="/var/lib/kubelet/pods/94969b49-36b1-4fcf-9fb3-54b56db5b8f6/volumes"
Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.147696 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87"]
Jan 30 08:45:00 crc kubenswrapper[4758]: E0130 08:45:00.148196 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94969b49-36b1-4fcf-9fb3-54b56db5b8f6" containerName="registry-server"
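The RemoveContainer/DeleteContainer errors at 08:44:57.331 above are a benign race rather than a real failure: the registry-server container had already been removed by the time the kubelet asked the runtime for its status, so CRI-O answers with gRPC NotFound and the kubelet just logs it and moves on. Cleanup paths are typically written to treat NotFound as success; a minimal sketch of that idempotent-delete pattern (deleteIfPresent and the inline remove func are hypothetical, while status.Code and codes.NotFound are the standard google.golang.org/grpc helpers):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // deleteIfPresent treats a NotFound from the runtime as success, so a
    // concurrent removal (as in the log above) is not reported as a failure.
    func deleteIfPresent(remove func(id string) error, id string) error {
        if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
            return fmt.Errorf("removing container %s: %w", id, err)
        }
        return nil // removed, or already gone
    }

    func main() {
        notFound := status.Error(codes.NotFound, "ID does not exist")
        err := deleteIfPresent(func(string) error { return notFound }, "863cd6f2")
        fmt.Println(err) // <nil>: the NotFound race is swallowed
    }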
Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.148208 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="94969b49-36b1-4fcf-9fb3-54b56db5b8f6" containerName="registry-server"
Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.148344 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="94969b49-36b1-4fcf-9fb3-54b56db5b8f6" containerName="registry-server"
Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.148720 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87"
Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.150499 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.150922 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.166082 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87"]
Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.268200 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76da59ec-7916-4d09-8154-61e9848aaec6-secret-volume\") pod \"collect-profiles-29496045-qgd87\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87"
Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.268280 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76da59ec-7916-4d09-8154-61e9848aaec6-config-volume\") pod \"collect-profiles-29496045-qgd87\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87"
Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.268316 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf972\" (UniqueName: \"kubernetes.io/projected/76da59ec-7916-4d09-8154-61e9848aaec6-kube-api-access-cf972\") pod \"collect-profiles-29496045-qgd87\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87"
Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.369500 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76da59ec-7916-4d09-8154-61e9848aaec6-secret-volume\") pod \"collect-profiles-29496045-qgd87\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87"
Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.369564 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76da59ec-7916-4d09-8154-61e9848aaec6-config-volume\") pod \"collect-profiles-29496045-qgd87\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87"
Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.369606 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cf972\" (UniqueName: \"kubernetes.io/projected/76da59ec-7916-4d09-8154-61e9848aaec6-kube-api-access-cf972\") pod \"collect-profiles-29496045-qgd87\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.371183 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76da59ec-7916-4d09-8154-61e9848aaec6-config-volume\") pod \"collect-profiles-29496045-qgd87\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.375309 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76da59ec-7916-4d09-8154-61e9848aaec6-secret-volume\") pod \"collect-profiles-29496045-qgd87\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.388593 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf972\" (UniqueName: \"kubernetes.io/projected/76da59ec-7916-4d09-8154-61e9848aaec6-kube-api-access-cf972\") pod \"collect-profiles-29496045-qgd87\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.465517 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:00 crc kubenswrapper[4758]: I0130 08:45:00.686141 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87"] Jan 30 08:45:01 crc kubenswrapper[4758]: I0130 08:45:01.341023 4758 generic.go:334] "Generic (PLEG): container finished" podID="76da59ec-7916-4d09-8154-61e9848aaec6" containerID="60e0a5bbfedb1bfda46d2d92720410ff62ddecd2757f775d7598cb6c8cd22199" exitCode=0 Jan 30 08:45:01 crc kubenswrapper[4758]: I0130 08:45:01.341154 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" event={"ID":"76da59ec-7916-4d09-8154-61e9848aaec6","Type":"ContainerDied","Data":"60e0a5bbfedb1bfda46d2d92720410ff62ddecd2757f775d7598cb6c8cd22199"} Jan 30 08:45:01 crc kubenswrapper[4758]: I0130 08:45:01.341412 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" event={"ID":"76da59ec-7916-4d09-8154-61e9848aaec6","Type":"ContainerStarted","Data":"39703a042eb17c46b43218f5906dda7812e648cdf0181a6b4dcd2bdae2a32c58"} Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.155747 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.158832 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.205716 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-vfjq6" Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.605675 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.803196 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76da59ec-7916-4d09-8154-61e9848aaec6-config-volume\") pod \"76da59ec-7916-4d09-8154-61e9848aaec6\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.803306 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf972\" (UniqueName: \"kubernetes.io/projected/76da59ec-7916-4d09-8154-61e9848aaec6-kube-api-access-cf972\") pod \"76da59ec-7916-4d09-8154-61e9848aaec6\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.803382 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76da59ec-7916-4d09-8154-61e9848aaec6-secret-volume\") pod \"76da59ec-7916-4d09-8154-61e9848aaec6\" (UID: \"76da59ec-7916-4d09-8154-61e9848aaec6\") " Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.804225 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76da59ec-7916-4d09-8154-61e9848aaec6-config-volume" (OuterVolumeSpecName: "config-volume") pod "76da59ec-7916-4d09-8154-61e9848aaec6" (UID: "76da59ec-7916-4d09-8154-61e9848aaec6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.810157 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76da59ec-7916-4d09-8154-61e9848aaec6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "76da59ec-7916-4d09-8154-61e9848aaec6" (UID: "76da59ec-7916-4d09-8154-61e9848aaec6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.812438 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76da59ec-7916-4d09-8154-61e9848aaec6-kube-api-access-cf972" (OuterVolumeSpecName: "kube-api-access-cf972") pod "76da59ec-7916-4d09-8154-61e9848aaec6" (UID: "76da59ec-7916-4d09-8154-61e9848aaec6"). InnerVolumeSpecName "kube-api-access-cf972". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.905114 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76da59ec-7916-4d09-8154-61e9848aaec6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.905163 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76da59ec-7916-4d09-8154-61e9848aaec6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:02 crc kubenswrapper[4758]: I0130 08:45:02.905177 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf972\" (UniqueName: \"kubernetes.io/projected/76da59ec-7916-4d09-8154-61e9848aaec6-kube-api-access-cf972\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:03 crc kubenswrapper[4758]: I0130 08:45:03.353920 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" event={"ID":"76da59ec-7916-4d09-8154-61e9848aaec6","Type":"ContainerDied","Data":"39703a042eb17c46b43218f5906dda7812e648cdf0181a6b4dcd2bdae2a32c58"} Jan 30 08:45:03 crc kubenswrapper[4758]: I0130 08:45:03.353959 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39703a042eb17c46b43218f5906dda7812e648cdf0181a6b4dcd2bdae2a32c58" Jan 30 08:45:03 crc kubenswrapper[4758]: I0130 08:45:03.353954 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87" Jan 30 08:45:05 crc kubenswrapper[4758]: I0130 08:45:05.933699 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-zh8ld" Jan 30 08:45:05 crc kubenswrapper[4758]: I0130 08:45:05.934293 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-zh8ld" Jan 30 08:45:05 crc kubenswrapper[4758]: I0130 08:45:05.957709 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-zh8ld" Jan 30 08:45:06 crc kubenswrapper[4758]: I0130 08:45:06.392697 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-zh8ld" Jan 30 08:45:06 crc kubenswrapper[4758]: I0130 08:45:06.574818 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kjhjn" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.650028 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56"] Jan 30 08:45:07 crc kubenswrapper[4758]: E0130 08:45:07.650871 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76da59ec-7916-4d09-8154-61e9848aaec6" containerName="collect-profiles" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.650885 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="76da59ec-7916-4d09-8154-61e9848aaec6" containerName="collect-profiles" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.651141 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="76da59ec-7916-4d09-8154-61e9848aaec6" containerName="collect-profiles" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.652707 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.662930 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-94mbk" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.673759 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56"] Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.674146 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-util\") pod \"43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.674221 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-bundle\") pod \"43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.674286 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwvkx\" (UniqueName: \"kubernetes.io/projected/51f6834f-53ed-44f6-ba73-fc7275fcb395-kube-api-access-gwvkx\") pod \"43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.775777 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwvkx\" (UniqueName: \"kubernetes.io/projected/51f6834f-53ed-44f6-ba73-fc7275fcb395-kube-api-access-gwvkx\") pod \"43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.775860 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-util\") pod \"43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.776314 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-util\") pod \"43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.776458 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-bundle\") pod \"43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.776854 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-bundle\") pod \"43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.802162 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwvkx\" (UniqueName: \"kubernetes.io/projected/51f6834f-53ed-44f6-ba73-fc7275fcb395-kube-api-access-gwvkx\") pod \"43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:07 crc kubenswrapper[4758]: I0130 08:45:07.990818 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:08 crc kubenswrapper[4758]: I0130 08:45:08.377875 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56"] Jan 30 08:45:09 crc kubenswrapper[4758]: I0130 08:45:09.386666 4758 generic.go:334] "Generic (PLEG): container finished" podID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerID="6b78a0c138800ecd404ecb28d5f29d2e973073cee76f846bb827d92ea4b8d4cb" exitCode=0 Jan 30 08:45:09 crc kubenswrapper[4758]: I0130 08:45:09.386703 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" event={"ID":"51f6834f-53ed-44f6-ba73-fc7275fcb395","Type":"ContainerDied","Data":"6b78a0c138800ecd404ecb28d5f29d2e973073cee76f846bb827d92ea4b8d4cb"} Jan 30 08:45:09 crc kubenswrapper[4758]: I0130 08:45:09.388595 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" event={"ID":"51f6834f-53ed-44f6-ba73-fc7275fcb395","Type":"ContainerStarted","Data":"3a2c8d67fbd967c8747b465b2a1ae35a5a44b6a7d8b4c83de875fef0f6929358"} Jan 30 08:45:10 crc kubenswrapper[4758]: I0130 08:45:10.395580 4758 generic.go:334] "Generic (PLEG): container finished" podID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerID="7310ecfc56b2a45dac3366f9f23b18b9a374aef78b95cf199011eaedbed8e93f" exitCode=0 Jan 30 08:45:10 crc kubenswrapper[4758]: I0130 08:45:10.395788 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" event={"ID":"51f6834f-53ed-44f6-ba73-fc7275fcb395","Type":"ContainerDied","Data":"7310ecfc56b2a45dac3366f9f23b18b9a374aef78b95cf199011eaedbed8e93f"} Jan 30 08:45:11 crc kubenswrapper[4758]: I0130 08:45:11.405500 4758 generic.go:334] "Generic (PLEG): container finished" podID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerID="5b20749419c0ea927a0528dc7b0613d70bdb32031042086668807bc5313bb778" exitCode=0 Jan 30 08:45:11 crc kubenswrapper[4758]: I0130 08:45:11.405543 4758 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" event={"ID":"51f6834f-53ed-44f6-ba73-fc7275fcb395","Type":"ContainerDied","Data":"5b20749419c0ea927a0528dc7b0613d70bdb32031042086668807bc5313bb778"} Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.651859 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.840664 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwvkx\" (UniqueName: \"kubernetes.io/projected/51f6834f-53ed-44f6-ba73-fc7275fcb395-kube-api-access-gwvkx\") pod \"51f6834f-53ed-44f6-ba73-fc7275fcb395\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.840717 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-bundle\") pod \"51f6834f-53ed-44f6-ba73-fc7275fcb395\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.840794 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-util\") pod \"51f6834f-53ed-44f6-ba73-fc7275fcb395\" (UID: \"51f6834f-53ed-44f6-ba73-fc7275fcb395\") " Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.841771 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-bundle" (OuterVolumeSpecName: "bundle") pod "51f6834f-53ed-44f6-ba73-fc7275fcb395" (UID: "51f6834f-53ed-44f6-ba73-fc7275fcb395"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.855326 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f6834f-53ed-44f6-ba73-fc7275fcb395-kube-api-access-gwvkx" (OuterVolumeSpecName: "kube-api-access-gwvkx") pod "51f6834f-53ed-44f6-ba73-fc7275fcb395" (UID: "51f6834f-53ed-44f6-ba73-fc7275fcb395"). InnerVolumeSpecName "kube-api-access-gwvkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.857897 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-util" (OuterVolumeSpecName: "util") pod "51f6834f-53ed-44f6-ba73-fc7275fcb395" (UID: "51f6834f-53ed-44f6-ba73-fc7275fcb395"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.942349 4758 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-util\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.942384 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwvkx\" (UniqueName: \"kubernetes.io/projected/51f6834f-53ed-44f6-ba73-fc7275fcb395-kube-api-access-gwvkx\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:12 crc kubenswrapper[4758]: I0130 08:45:12.942399 4758 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51f6834f-53ed-44f6-ba73-fc7275fcb395-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:13 crc kubenswrapper[4758]: I0130 08:45:13.421582 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" event={"ID":"51f6834f-53ed-44f6-ba73-fc7275fcb395","Type":"ContainerDied","Data":"3a2c8d67fbd967c8747b465b2a1ae35a5a44b6a7d8b4c83de875fef0f6929358"} Jan 30 08:45:13 crc kubenswrapper[4758]: I0130 08:45:13.421621 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56" Jan 30 08:45:13 crc kubenswrapper[4758]: I0130 08:45:13.421635 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a2c8d67fbd967c8747b465b2a1ae35a5a44b6a7d8b4c83de875fef0f6929358" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.232124 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj"] Jan 30 08:45:15 crc kubenswrapper[4758]: E0130 08:45:15.233593 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerName="util" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.233668 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerName="util" Jan 30 08:45:15 crc kubenswrapper[4758]: E0130 08:45:15.233729 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerName="pull" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.233786 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerName="pull" Jan 30 08:45:15 crc kubenswrapper[4758]: E0130 08:45:15.233859 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerName="extract" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.233914 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerName="extract" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.234097 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f6834f-53ed-44f6-ba73-fc7275fcb395" containerName="extract" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.234608 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.236930 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-cn2kb" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.271957 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj"] Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.371266 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b46sd\" (UniqueName: \"kubernetes.io/projected/10f9b3c9-c691-403e-801f-420bc2701a95-kube-api-access-b46sd\") pod \"openstack-operator-controller-init-744b85dfd5-tlqzj\" (UID: \"10f9b3c9-c691-403e-801f-420bc2701a95\") " pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.472155 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b46sd\" (UniqueName: \"kubernetes.io/projected/10f9b3c9-c691-403e-801f-420bc2701a95-kube-api-access-b46sd\") pod \"openstack-operator-controller-init-744b85dfd5-tlqzj\" (UID: \"10f9b3c9-c691-403e-801f-420bc2701a95\") " pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.492060 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b46sd\" (UniqueName: \"kubernetes.io/projected/10f9b3c9-c691-403e-801f-420bc2701a95-kube-api-access-b46sd\") pod \"openstack-operator-controller-init-744b85dfd5-tlqzj\" (UID: \"10f9b3c9-c691-403e-801f-420bc2701a95\") " pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" Jan 30 08:45:15 crc kubenswrapper[4758]: I0130 08:45:15.554220 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" Jan 30 08:45:16 crc kubenswrapper[4758]: I0130 08:45:16.088694 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj"] Jan 30 08:45:16 crc kubenswrapper[4758]: I0130 08:45:16.451216 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" event={"ID":"10f9b3c9-c691-403e-801f-420bc2701a95","Type":"ContainerStarted","Data":"968aa089babba809e58ff54d68160c8e9636488db2cab397ece41f90e6e28500"} Jan 30 08:45:22 crc kubenswrapper[4758]: I0130 08:45:22.387247 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:45:22 crc kubenswrapper[4758]: I0130 08:45:22.387845 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:45:23 crc kubenswrapper[4758]: I0130 08:45:23.515899 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" event={"ID":"10f9b3c9-c691-403e-801f-420bc2701a95","Type":"ContainerStarted","Data":"5f4242a3c6b9c42575f0f215a6d7484a648d1ad6376bba08b785a13514c7f1c3"} Jan 30 08:45:23 crc kubenswrapper[4758]: I0130 08:45:23.516241 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" Jan 30 08:45:23 crc kubenswrapper[4758]: I0130 08:45:23.546250 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" podStartSLOduration=1.7005568119999999 podStartE2EDuration="8.546230595s" podCreationTimestamp="2026-01-30 08:45:15 +0000 UTC" firstStartedPulling="2026-01-30 08:45:16.099163699 +0000 UTC m=+921.071475250" lastFinishedPulling="2026-01-30 08:45:22.944837482 +0000 UTC m=+927.917149033" observedRunningTime="2026-01-30 08:45:23.541581859 +0000 UTC m=+928.513893410" watchObservedRunningTime="2026-01-30 08:45:23.546230595 +0000 UTC m=+928.518542156" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.262576 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2t2jt"] Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.264228 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.282218 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2t2jt"] Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.397772 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbd2\" (UniqueName: \"kubernetes.io/projected/364bc201-959b-4237-a379-9596b1223cbc-kube-api-access-xxbd2\") pod \"community-operators-2t2jt\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.397898 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-catalog-content\") pod \"community-operators-2t2jt\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.397976 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-utilities\") pod \"community-operators-2t2jt\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.499319 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-catalog-content\") pod \"community-operators-2t2jt\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.499561 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-utilities\") pod \"community-operators-2t2jt\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.499690 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxbd2\" (UniqueName: \"kubernetes.io/projected/364bc201-959b-4237-a379-9596b1223cbc-kube-api-access-xxbd2\") pod \"community-operators-2t2jt\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.500010 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-catalog-content\") pod \"community-operators-2t2jt\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.500229 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-utilities\") pod \"community-operators-2t2jt\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.531009 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xxbd2\" (UniqueName: \"kubernetes.io/projected/364bc201-959b-4237-a379-9596b1223cbc-kube-api-access-xxbd2\") pod \"community-operators-2t2jt\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:32 crc kubenswrapper[4758]: I0130 08:45:32.579897 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:33 crc kubenswrapper[4758]: I0130 08:45:33.230205 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2t2jt"] Jan 30 08:45:33 crc kubenswrapper[4758]: W0130 08:45:33.235110 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod364bc201_959b_4237_a379_9596b1223cbc.slice/crio-4eeb4bd6773839ff5fce1d2a6d8d8751bd0f78cc64649777cb14023c1c62762d WatchSource:0}: Error finding container 4eeb4bd6773839ff5fce1d2a6d8d8751bd0f78cc64649777cb14023c1c62762d: Status 404 returned error can't find the container with id 4eeb4bd6773839ff5fce1d2a6d8d8751bd0f78cc64649777cb14023c1c62762d Jan 30 08:45:33 crc kubenswrapper[4758]: I0130 08:45:33.574703 4758 generic.go:334] "Generic (PLEG): container finished" podID="364bc201-959b-4237-a379-9596b1223cbc" containerID="69a7ca6a35440c8d6a6b5c64641df205b1f58c73899c4a6f6ad67e7fef9baab8" exitCode=0 Jan 30 08:45:33 crc kubenswrapper[4758]: I0130 08:45:33.574759 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t2jt" event={"ID":"364bc201-959b-4237-a379-9596b1223cbc","Type":"ContainerDied","Data":"69a7ca6a35440c8d6a6b5c64641df205b1f58c73899c4a6f6ad67e7fef9baab8"} Jan 30 08:45:33 crc kubenswrapper[4758]: I0130 08:45:33.574798 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t2jt" event={"ID":"364bc201-959b-4237-a379-9596b1223cbc","Type":"ContainerStarted","Data":"4eeb4bd6773839ff5fce1d2a6d8d8751bd0f78cc64649777cb14023c1c62762d"} Jan 30 08:45:35 crc kubenswrapper[4758]: I0130 08:45:35.557023 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" Jan 30 08:45:35 crc kubenswrapper[4758]: I0130 08:45:35.589018 4758 generic.go:334] "Generic (PLEG): container finished" podID="364bc201-959b-4237-a379-9596b1223cbc" containerID="4ac71eee89e7259c48423a040dbed7ad9542787e6ed5a91e7cacd0c60611852e" exitCode=0 Jan 30 08:45:35 crc kubenswrapper[4758]: I0130 08:45:35.589081 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t2jt" event={"ID":"364bc201-959b-4237-a379-9596b1223cbc","Type":"ContainerDied","Data":"4ac71eee89e7259c48423a040dbed7ad9542787e6ed5a91e7cacd0c60611852e"} Jan 30 08:45:36 crc kubenswrapper[4758]: I0130 08:45:36.596812 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t2jt" event={"ID":"364bc201-959b-4237-a379-9596b1223cbc","Type":"ContainerStarted","Data":"cf06190608af651f83eb58cbffe05edad66dc7e1e8fac5df28ea762a91e937ce"} Jan 30 08:45:36 crc kubenswrapper[4758]: I0130 08:45:36.617427 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2t2jt" podStartSLOduration=2.014991932 podStartE2EDuration="4.617408289s" podCreationTimestamp="2026-01-30 08:45:32 +0000 UTC" 
firstStartedPulling="2026-01-30 08:45:33.57610463 +0000 UTC m=+938.548416181" lastFinishedPulling="2026-01-30 08:45:36.178520977 +0000 UTC m=+941.150832538" observedRunningTime="2026-01-30 08:45:36.614006161 +0000 UTC m=+941.586317712" watchObservedRunningTime="2026-01-30 08:45:36.617408289 +0000 UTC m=+941.589719840" Jan 30 08:45:42 crc kubenswrapper[4758]: I0130 08:45:42.580984 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:42 crc kubenswrapper[4758]: I0130 08:45:42.581545 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:42 crc kubenswrapper[4758]: I0130 08:45:42.661390 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:42 crc kubenswrapper[4758]: I0130 08:45:42.796208 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:42 crc kubenswrapper[4758]: I0130 08:45:42.986252 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2t2jt"] Jan 30 08:45:44 crc kubenswrapper[4758]: I0130 08:45:44.710559 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2t2jt" podUID="364bc201-959b-4237-a379-9596b1223cbc" containerName="registry-server" containerID="cri-o://cf06190608af651f83eb58cbffe05edad66dc7e1e8fac5df28ea762a91e937ce" gracePeriod=2 Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.723143 4758 generic.go:334] "Generic (PLEG): container finished" podID="364bc201-959b-4237-a379-9596b1223cbc" containerID="cf06190608af651f83eb58cbffe05edad66dc7e1e8fac5df28ea762a91e937ce" exitCode=0 Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.723247 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t2jt" event={"ID":"364bc201-959b-4237-a379-9596b1223cbc","Type":"ContainerDied","Data":"cf06190608af651f83eb58cbffe05edad66dc7e1e8fac5df28ea762a91e937ce"} Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.723555 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2t2jt" event={"ID":"364bc201-959b-4237-a379-9596b1223cbc","Type":"ContainerDied","Data":"4eeb4bd6773839ff5fce1d2a6d8d8751bd0f78cc64649777cb14023c1c62762d"} Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.723577 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eeb4bd6773839ff5fce1d2a6d8d8751bd0f78cc64649777cb14023c1c62762d" Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.745381 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.820599 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxbd2\" (UniqueName: \"kubernetes.io/projected/364bc201-959b-4237-a379-9596b1223cbc-kube-api-access-xxbd2\") pod \"364bc201-959b-4237-a379-9596b1223cbc\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.820677 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-utilities\") pod \"364bc201-959b-4237-a379-9596b1223cbc\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.820863 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-catalog-content\") pod \"364bc201-959b-4237-a379-9596b1223cbc\" (UID: \"364bc201-959b-4237-a379-9596b1223cbc\") " Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.826244 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-utilities" (OuterVolumeSpecName: "utilities") pod "364bc201-959b-4237-a379-9596b1223cbc" (UID: "364bc201-959b-4237-a379-9596b1223cbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.860480 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/364bc201-959b-4237-a379-9596b1223cbc-kube-api-access-xxbd2" (OuterVolumeSpecName: "kube-api-access-xxbd2") pod "364bc201-959b-4237-a379-9596b1223cbc" (UID: "364bc201-959b-4237-a379-9596b1223cbc"). InnerVolumeSpecName "kube-api-access-xxbd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.896253 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "364bc201-959b-4237-a379-9596b1223cbc" (UID: "364bc201-959b-4237-a379-9596b1223cbc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.935907 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.935953 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxbd2\" (UniqueName: \"kubernetes.io/projected/364bc201-959b-4237-a379-9596b1223cbc-kube-api-access-xxbd2\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:45 crc kubenswrapper[4758]: I0130 08:45:45.935968 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364bc201-959b-4237-a379-9596b1223cbc-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:45:46 crc kubenswrapper[4758]: I0130 08:45:46.728178 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2t2jt" Jan 30 08:45:46 crc kubenswrapper[4758]: I0130 08:45:46.757377 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2t2jt"] Jan 30 08:45:46 crc kubenswrapper[4758]: I0130 08:45:46.761684 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2t2jt"] Jan 30 08:45:47 crc kubenswrapper[4758]: I0130 08:45:47.774438 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="364bc201-959b-4237-a379-9596b1223cbc" path="/var/lib/kubelet/pods/364bc201-959b-4237-a379-9596b1223cbc/volumes" Jan 30 08:45:52 crc kubenswrapper[4758]: I0130 08:45:52.387296 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:45:52 crc kubenswrapper[4758]: I0130 08:45:52.387357 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:45:52 crc kubenswrapper[4758]: I0130 08:45:52.387421 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:45:52 crc kubenswrapper[4758]: I0130 08:45:52.388073 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69b903b761c0949dbe1210bdef2b095568ae802b78c69042a2686b209bdd29e6"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:45:52 crc kubenswrapper[4758]: I0130 08:45:52.388138 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://69b903b761c0949dbe1210bdef2b095568ae802b78c69042a2686b209bdd29e6" gracePeriod=600 Jan 30 08:45:52 crc kubenswrapper[4758]: I0130 08:45:52.786440 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="69b903b761c0949dbe1210bdef2b095568ae802b78c69042a2686b209bdd29e6" exitCode=0 Jan 30 08:45:52 crc kubenswrapper[4758]: I0130 08:45:52.786722 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"69b903b761c0949dbe1210bdef2b095568ae802b78c69042a2686b209bdd29e6"} Jan 30 08:45:52 crc kubenswrapper[4758]: I0130 08:45:52.786747 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"30f01f1e437d64a8924bada24b4f475f6ae60692a25492cb418ecb9bb3c281c2"} Jan 30 08:45:52 crc kubenswrapper[4758]: I0130 08:45:52.786762 4758 scope.go:117] "RemoveContainer" 
containerID="deb29cef8b897137735d2465c08026013481066ac7d08e4c83f4d9efbbed9a89" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.852140 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq"] Jan 30 08:45:53 crc kubenswrapper[4758]: E0130 08:45:53.852648 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364bc201-959b-4237-a379-9596b1223cbc" containerName="extract-content" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.852660 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="364bc201-959b-4237-a379-9596b1223cbc" containerName="extract-content" Jan 30 08:45:53 crc kubenswrapper[4758]: E0130 08:45:53.852670 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364bc201-959b-4237-a379-9596b1223cbc" containerName="extract-utilities" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.852677 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="364bc201-959b-4237-a379-9596b1223cbc" containerName="extract-utilities" Jan 30 08:45:53 crc kubenswrapper[4758]: E0130 08:45:53.852692 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364bc201-959b-4237-a379-9596b1223cbc" containerName="registry-server" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.852699 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="364bc201-959b-4237-a379-9596b1223cbc" containerName="registry-server" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.852809 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="364bc201-959b-4237-a379-9596b1223cbc" containerName="registry-server" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.853223 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.861083 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p"] Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.861852 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.864096 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-st9mx" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.873487 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-xnjg2" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.879914 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km"] Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.880577 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.883573 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-r4nwl" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.895893 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq"] Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.906514 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p"] Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.922139 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x"] Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.922965 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.925954 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km"] Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.932691 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kggjc\" (UniqueName: \"kubernetes.io/projected/1c4d1258-0416-49d0-a3a5-6ece70dc0c46-kube-api-access-kggjc\") pod \"barbican-operator-controller-manager-566c8844c5-6nj4p\" (UID: \"1c4d1258-0416-49d0-a3a5-6ece70dc0c46\") " pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.932819 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-486cf\" (UniqueName: \"kubernetes.io/projected/62b3bb0d-894a-4cb1-b644-d42f3cba98d7-kube-api-access-486cf\") pod \"cinder-operator-controller-manager-5f9bbdc844-6cgsq\" (UID: \"62b3bb0d-894a-4cb1-b644-d42f3cba98d7\") " pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.933309 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-kbt6n" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.948296 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x"] Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.977091 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-54985f5875-jct4c"] Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.977911 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.981331 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-xldcc" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.982492 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9"] Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.983532 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" Jan 30 08:45:53 crc kubenswrapper[4758]: I0130 08:45:53.984959 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-xczqj" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.007103 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.010839 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-54985f5875-jct4c"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.033867 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7zck\" (UniqueName: \"kubernetes.io/projected/dc189df6-25bc-4d6e-aa30-05ce0db12721-kube-api-access-s7zck\") pod \"designate-operator-controller-manager-8f4c5cb64-cp9km\" (UID: \"dc189df6-25bc-4d6e-aa30-05ce0db12721\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.033940 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kggjc\" (UniqueName: \"kubernetes.io/projected/1c4d1258-0416-49d0-a3a5-6ece70dc0c46-kube-api-access-kggjc\") pod \"barbican-operator-controller-manager-566c8844c5-6nj4p\" (UID: \"1c4d1258-0416-49d0-a3a5-6ece70dc0c46\") " pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.034069 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s97nc\" (UniqueName: \"kubernetes.io/projected/9579fc9d-6eae-4249-ac43-35144ed58bed-kube-api-access-s97nc\") pod \"heat-operator-controller-manager-54985f5875-jct4c\" (UID: \"9579fc9d-6eae-4249-ac43-35144ed58bed\") " pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.034101 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-486cf\" (UniqueName: \"kubernetes.io/projected/62b3bb0d-894a-4cb1-b644-d42f3cba98d7-kube-api-access-486cf\") pod \"cinder-operator-controller-manager-5f9bbdc844-6cgsq\" (UID: \"62b3bb0d-894a-4cb1-b644-d42f3cba98d7\") " pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.034125 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mknpj\" (UniqueName: \"kubernetes.io/projected/fe68673c-8979-46ee-a4aa-f95bcd7b4e8a-kube-api-access-mknpj\") pod \"glance-operator-controller-manager-784f59d4f4-sw42x\" (UID: 
\"fe68673c-8979-46ee-a4aa-f95bcd7b4e8a\") " pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.076089 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-5k6df"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.076972 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.087880 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.088153 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-mxwhp" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.103906 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-486cf\" (UniqueName: \"kubernetes.io/projected/62b3bb0d-894a-4cb1-b644-d42f3cba98d7-kube-api-access-486cf\") pod \"cinder-operator-controller-manager-5f9bbdc844-6cgsq\" (UID: \"62b3bb0d-894a-4cb1-b644-d42f3cba98d7\") " pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.106288 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.110794 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.123567 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-zdbjv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.136417 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.137687 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b9mk\" (UniqueName: \"kubernetes.io/projected/ef31968c-db2e-4083-a08f-19a8daf0ac2d-kube-api-access-2b9mk\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.137840 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s97nc\" (UniqueName: \"kubernetes.io/projected/9579fc9d-6eae-4249-ac43-35144ed58bed-kube-api-access-s97nc\") pod \"heat-operator-controller-manager-54985f5875-jct4c\" (UID: \"9579fc9d-6eae-4249-ac43-35144ed58bed\") " pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.137881 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mknpj\" (UniqueName: \"kubernetes.io/projected/fe68673c-8979-46ee-a4aa-f95bcd7b4e8a-kube-api-access-mknpj\") pod \"glance-operator-controller-manager-784f59d4f4-sw42x\" (UID: \"fe68673c-8979-46ee-a4aa-f95bcd7b4e8a\") " 
pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.137910 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlcl4\" (UniqueName: \"kubernetes.io/projected/b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2-kube-api-access-hlcl4\") pod \"horizon-operator-controller-manager-5fb775575f-vjdn9\" (UID: \"b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.137942 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.137968 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7zck\" (UniqueName: \"kubernetes.io/projected/dc189df6-25bc-4d6e-aa30-05ce0db12721-kube-api-access-s7zck\") pod \"designate-operator-controller-manager-8f4c5cb64-cp9km\" (UID: \"dc189df6-25bc-4d6e-aa30-05ce0db12721\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.140720 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kggjc\" (UniqueName: \"kubernetes.io/projected/1c4d1258-0416-49d0-a3a5-6ece70dc0c46-kube-api-access-kggjc\") pod \"barbican-operator-controller-manager-566c8844c5-6nj4p\" (UID: \"1c4d1258-0416-49d0-a3a5-6ece70dc0c46\") " pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.156579 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-5k6df"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.168630 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.169340 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.169398 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.175279 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-6k9xw" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.178096 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.191436 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.197679 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.198854 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.212735 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mknpj\" (UniqueName: \"kubernetes.io/projected/fe68673c-8979-46ee-a4aa-f95bcd7b4e8a-kube-api-access-mknpj\") pod \"glance-operator-controller-manager-784f59d4f4-sw42x\" (UID: \"fe68673c-8979-46ee-a4aa-f95bcd7b4e8a\") " pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.222663 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-wn9f9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.234616 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7zck\" (UniqueName: \"kubernetes.io/projected/dc189df6-25bc-4d6e-aa30-05ce0db12721-kube-api-access-s7zck\") pod \"designate-operator-controller-manager-8f4c5cb64-cp9km\" (UID: \"dc189df6-25bc-4d6e-aa30-05ce0db12721\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.236326 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s97nc\" (UniqueName: \"kubernetes.io/projected/9579fc9d-6eae-4249-ac43-35144ed58bed-kube-api-access-s97nc\") pod \"heat-operator-controller-manager-54985f5875-jct4c\" (UID: \"9579fc9d-6eae-4249-ac43-35144ed58bed\") " pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.237266 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.237988 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.238708 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlcl4\" (UniqueName: \"kubernetes.io/projected/b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2-kube-api-access-hlcl4\") pod \"horizon-operator-controller-manager-5fb775575f-vjdn9\" (UID: \"b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.238758 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.238794 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22cl2\" (UniqueName: \"kubernetes.io/projected/5c2a7d2b-62a1-468b-a3b3-fe77698a41a2-kube-api-access-22cl2\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-4jmpb\" (UID: \"5c2a7d2b-62a1-468b-a3b3-fe77698a41a2\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.238820 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b9mk\" (UniqueName: \"kubernetes.io/projected/ef31968c-db2e-4083-a08f-19a8daf0ac2d-kube-api-access-2b9mk\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.238841 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tfdx\" (UniqueName: \"kubernetes.io/projected/b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7-kube-api-access-4tfdx\") pod \"keystone-operator-controller-manager-6c9d56f9bd-h89d9\" (UID: \"b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7\") " pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" Jan 30 08:45:54 crc kubenswrapper[4758]: E0130 08:45:54.239206 4758 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 08:45:54 crc kubenswrapper[4758]: E0130 08:45:54.239252 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert podName:ef31968c-db2e-4083-a08f-19a8daf0ac2d nodeName:}" failed. No retries permitted until 2026-01-30 08:45:54.739236335 +0000 UTC m=+959.711547886 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert") pod "infra-operator-controller-manager-79955696d6-5k6df" (UID: "ef31968c-db2e-4083-a08f-19a8daf0ac2d") : secret "infra-operator-webhook-server-cert" not found Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.256504 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.257231 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-cqxpc" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.290187 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.311853 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b9mk\" (UniqueName: \"kubernetes.io/projected/ef31968c-db2e-4083-a08f-19a8daf0ac2d-kube-api-access-2b9mk\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.324264 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.337282 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.347749 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g942j\" (UniqueName: \"kubernetes.io/projected/ac7c91ce-d4d9-4754-9828-a43140218228-kube-api-access-g942j\") pod \"manila-operator-controller-manager-74954f9f78-kxmjn\" (UID: \"ac7c91ce-d4d9-4754-9828-a43140218228\") " pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.347853 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44wzb\" (UniqueName: \"kubernetes.io/projected/0565da5c-02e0-409f-b801-e06c3e79ef47-kube-api-access-44wzb\") pod \"mariadb-operator-controller-manager-67bf948998-c5d9k\" (UID: \"0565da5c-02e0-409f-b801-e06c3e79ef47\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.347877 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22cl2\" (UniqueName: \"kubernetes.io/projected/5c2a7d2b-62a1-468b-a3b3-fe77698a41a2-kube-api-access-22cl2\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-4jmpb\" (UID: \"5c2a7d2b-62a1-468b-a3b3-fe77698a41a2\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.347911 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tfdx\" (UniqueName: \"kubernetes.io/projected/b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7-kube-api-access-4tfdx\") pod \"keystone-operator-controller-manager-6c9d56f9bd-h89d9\" (UID: \"b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7\") " pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.359357 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlcl4\" (UniqueName: \"kubernetes.io/projected/b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2-kube-api-access-hlcl4\") pod 
\"horizon-operator-controller-manager-5fb775575f-vjdn9\" (UID: \"b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.365905 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.366785 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.375502 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-w6x8z" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.396375 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tfdx\" (UniqueName: \"kubernetes.io/projected/b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7-kube-api-access-4tfdx\") pod \"keystone-operator-controller-manager-6c9d56f9bd-h89d9\" (UID: \"b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7\") " pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.403977 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22cl2\" (UniqueName: \"kubernetes.io/projected/5c2a7d2b-62a1-468b-a3b3-fe77698a41a2-kube-api-access-22cl2\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-4jmpb\" (UID: \"5c2a7d2b-62a1-468b-a3b3-fe77698a41a2\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.419710 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.420759 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.467686 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44wzb\" (UniqueName: \"kubernetes.io/projected/0565da5c-02e0-409f-b801-e06c3e79ef47-kube-api-access-44wzb\") pod \"mariadb-operator-controller-manager-67bf948998-c5d9k\" (UID: \"0565da5c-02e0-409f-b801-e06c3e79ef47\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.468026 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g942j\" (UniqueName: \"kubernetes.io/projected/ac7c91ce-d4d9-4754-9828-a43140218228-kube-api-access-g942j\") pod \"manila-operator-controller-manager-74954f9f78-kxmjn\" (UID: \"ac7c91ce-d4d9-4754-9828-a43140218228\") " pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.470449 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.473486 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrlm9\" (UniqueName: \"kubernetes.io/projected/b6d614d3-1ced-4b27-bd91-8edd410e5fc5-kube-api-access-qrlm9\") pod \"neutron-operator-controller-manager-6cfc4f6754-7s6n2\" (UID: \"b6d614d3-1ced-4b27-bd91-8edd410e5fc5\") " pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.482230 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.487409 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rc2gx" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.512898 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.576970 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g942j\" (UniqueName: \"kubernetes.io/projected/ac7c91ce-d4d9-4754-9828-a43140218228-kube-api-access-g942j\") pod \"manila-operator-controller-manager-74954f9f78-kxmjn\" (UID: \"ac7c91ce-d4d9-4754-9828-a43140218228\") " pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.586780 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrlm9\" (UniqueName: \"kubernetes.io/projected/b6d614d3-1ced-4b27-bd91-8edd410e5fc5-kube-api-access-qrlm9\") pod \"neutron-operator-controller-manager-6cfc4f6754-7s6n2\" (UID: \"b6d614d3-1ced-4b27-bd91-8edd410e5fc5\") " pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.596736 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsjts\" (UniqueName: \"kubernetes.io/projected/0b006f91-5b27-4342-935b-c7a7f174c03b-kube-api-access-wsjts\") pod \"nova-operator-controller-manager-67f5956bc9-hp2mv\" (UID: \"0b006f91-5b27-4342-935b-c7a7f174c03b\") " pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.591934 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44wzb\" (UniqueName: \"kubernetes.io/projected/0565da5c-02e0-409f-b801-e06c3e79ef47-kube-api-access-44wzb\") pod \"mariadb-operator-controller-manager-67bf948998-c5d9k\" (UID: \"0565da5c-02e0-409f-b801-e06c3e79ef47\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.597053 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.605006 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.632748 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.635115 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.636018 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.648120 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.658424 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.660781 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.660897 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.665673 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrlm9\" (UniqueName: \"kubernetes.io/projected/b6d614d3-1ced-4b27-bd91-8edd410e5fc5-kube-api-access-qrlm9\") pod \"neutron-operator-controller-manager-6cfc4f6754-7s6n2\" (UID: \"b6d614d3-1ced-4b27-bd91-8edd410e5fc5\") " pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.669118 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.670482 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-ptnjs" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.676629 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-mj5td" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.677166 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.677725 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.678600 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.682474 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-8jh4b" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.697976 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.699115 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.699738 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsjts\" (UniqueName: \"kubernetes.io/projected/0b006f91-5b27-4342-935b-c7a7f174c03b-kube-api-access-wsjts\") pod \"nova-operator-controller-manager-67f5956bc9-hp2mv\" (UID: \"0b006f91-5b27-4342-935b-c7a7f174c03b\") " pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.705962 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.717077 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.718343 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.724858 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-dz8rk" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.732944 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.733848 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.736766 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-4v29k" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.743788 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsjts\" (UniqueName: \"kubernetes.io/projected/0b006f91-5b27-4342-935b-c7a7f174c03b-kube-api-access-wsjts\") pod \"nova-operator-controller-manager-67f5956bc9-hp2mv\" (UID: \"0b006f91-5b27-4342-935b-c7a7f174c03b\") " pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.744791 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.745770 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.755139 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-24gx9" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.760481 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.773510 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.795917 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.802146 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zhgb\" (UniqueName: \"kubernetes.io/projected/8cb0c6cc-e254-4dae-b433-397504fba6dc-kube-api-access-2zhgb\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.802210 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v898l\" (UniqueName: \"kubernetes.io/projected/a104527b-98dc-4120-91b5-6e7e9466b9a3-kube-api-access-v898l\") pod \"octavia-operator-controller-manager-694c6dcf95-kdnzv\" (UID: \"a104527b-98dc-4120-91b5-6e7e9466b9a3\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.802237 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.802301 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf6cb\" (UniqueName: \"kubernetes.io/projected/03b807f9-cd53-4189-b7df-09c5ea5fdf53-kube-api-access-lf6cb\") pod \"ovn-operator-controller-manager-788c46999f-qlc8l\" (UID: \"03b807f9-cd53-4189-b7df-09c5ea5fdf53\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.802320 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:45:54 crc kubenswrapper[4758]: E0130 08:45:54.805335 4758 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 08:45:54 crc kubenswrapper[4758]: E0130 08:45:54.805419 4758 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert podName:ef31968c-db2e-4083-a08f-19a8daf0ac2d nodeName:}" failed. No retries permitted until 2026-01-30 08:45:55.805381637 +0000 UTC m=+960.777693188 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert") pod "infra-operator-controller-manager-79955696d6-5k6df" (UID: "ef31968c-db2e-4083-a08f-19a8daf0ac2d") : secret "infra-operator-webhook-server-cert" not found Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.806477 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.810129 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.811434 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.820235 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-gc64n" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.839109 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.840174 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.861616 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-phszr" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.903079 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4"] Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.917860 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qkvz\" (UniqueName: \"kubernetes.io/projected/b271df00-e9f2-4c58-94e7-22ea4b7d7eaf-kube-api-access-8qkvz\") pod \"telemetry-operator-controller-manager-76cd99594-xhszj\" (UID: \"b271df00-e9f2-4c58-94e7-22ea4b7d7eaf\") " pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.917923 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zhgb\" (UniqueName: \"kubernetes.io/projected/8cb0c6cc-e254-4dae-b433-397504fba6dc-kube-api-access-2zhgb\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.917971 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v898l\" (UniqueName: \"kubernetes.io/projected/a104527b-98dc-4120-91b5-6e7e9466b9a3-kube-api-access-v898l\") pod \"octavia-operator-controller-manager-694c6dcf95-kdnzv\" (UID: 
\"a104527b-98dc-4120-91b5-6e7e9466b9a3\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.918013 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctj6h\" (UniqueName: \"kubernetes.io/projected/e4787454-8070-449e-a7d0-2ff179eaaff3-kube-api-access-ctj6h\") pod \"swift-operator-controller-manager-7d4f9d9c9b-t7dht\" (UID: \"e4787454-8070-449e-a7d0-2ff179eaaff3\") " pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.918059 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grlxx\" (UniqueName: \"kubernetes.io/projected/178c1ff9-1a2a-4c4a-8258-89c267a5d0aa-kube-api-access-grlxx\") pod \"placement-operator-controller-manager-5b964cf4cd-vzcpf\" (UID: \"178c1ff9-1a2a-4c4a-8258-89c267a5d0aa\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.918077 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k9sw\" (UniqueName: \"kubernetes.io/projected/fe039ec9-aaec-4e17-8eac-c7719245ba4d-kube-api-access-6k9sw\") pod \"test-operator-controller-manager-56f8bfcd9f-pmvv4\" (UID: \"fe039ec9-aaec-4e17-8eac-c7719245ba4d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.918107 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf6cb\" (UniqueName: \"kubernetes.io/projected/03b807f9-cd53-4189-b7df-09c5ea5fdf53-kube-api-access-lf6cb\") pod \"ovn-operator-controller-manager-788c46999f-qlc8l\" (UID: \"03b807f9-cd53-4189-b7df-09c5ea5fdf53\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.918123 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:45:54 crc kubenswrapper[4758]: E0130 08:45:54.918246 4758 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:45:54 crc kubenswrapper[4758]: E0130 08:45:54.918287 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert podName:8cb0c6cc-e254-4dae-b433-397504fba6dc nodeName:}" failed. No retries permitted until 2026-01-30 08:45:55.418273735 +0000 UTC m=+960.390585286 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" (UID: "8cb0c6cc-e254-4dae-b433-397504fba6dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:45:54 crc kubenswrapper[4758]: I0130 08:45:54.925773 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc"] Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.020264 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cm6c\" (UniqueName: \"kubernetes.io/projected/73790ffa-61b1-489c-94c9-3934af94185f-kube-api-access-7cm6c\") pod \"watcher-operator-controller-manager-5bf648c946-ngzmc\" (UID: \"73790ffa-61b1-489c-94c9-3934af94185f\") " pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.020331 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grlxx\" (UniqueName: \"kubernetes.io/projected/178c1ff9-1a2a-4c4a-8258-89c267a5d0aa-kube-api-access-grlxx\") pod \"placement-operator-controller-manager-5b964cf4cd-vzcpf\" (UID: \"178c1ff9-1a2a-4c4a-8258-89c267a5d0aa\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.020374 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k9sw\" (UniqueName: \"kubernetes.io/projected/fe039ec9-aaec-4e17-8eac-c7719245ba4d-kube-api-access-6k9sw\") pod \"test-operator-controller-manager-56f8bfcd9f-pmvv4\" (UID: \"fe039ec9-aaec-4e17-8eac-c7719245ba4d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.020452 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qkvz\" (UniqueName: \"kubernetes.io/projected/b271df00-e9f2-4c58-94e7-22ea4b7d7eaf-kube-api-access-8qkvz\") pod \"telemetry-operator-controller-manager-76cd99594-xhszj\" (UID: \"b271df00-e9f2-4c58-94e7-22ea4b7d7eaf\") " pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.020560 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctj6h\" (UniqueName: \"kubernetes.io/projected/e4787454-8070-449e-a7d0-2ff179eaaff3-kube-api-access-ctj6h\") pod \"swift-operator-controller-manager-7d4f9d9c9b-t7dht\" (UID: \"e4787454-8070-449e-a7d0-2ff179eaaff3\") " pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.024008 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zhgb\" (UniqueName: \"kubernetes.io/projected/8cb0c6cc-e254-4dae-b433-397504fba6dc-kube-api-access-2zhgb\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.070844 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grlxx\" (UniqueName: 
\"kubernetes.io/projected/178c1ff9-1a2a-4c4a-8258-89c267a5d0aa-kube-api-access-grlxx\") pod \"placement-operator-controller-manager-5b964cf4cd-vzcpf\" (UID: \"178c1ff9-1a2a-4c4a-8258-89c267a5d0aa\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.124631 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cm6c\" (UniqueName: \"kubernetes.io/projected/73790ffa-61b1-489c-94c9-3934af94185f-kube-api-access-7cm6c\") pod \"watcher-operator-controller-manager-5bf648c946-ngzmc\" (UID: \"73790ffa-61b1-489c-94c9-3934af94185f\") " pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.147386 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qkvz\" (UniqueName: \"kubernetes.io/projected/b271df00-e9f2-4c58-94e7-22ea4b7d7eaf-kube-api-access-8qkvz\") pod \"telemetry-operator-controller-manager-76cd99594-xhszj\" (UID: \"b271df00-e9f2-4c58-94e7-22ea4b7d7eaf\") " pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.172842 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.282127 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.301087 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k9sw\" (UniqueName: \"kubernetes.io/projected/fe039ec9-aaec-4e17-8eac-c7719245ba4d-kube-api-access-6k9sw\") pod \"test-operator-controller-manager-56f8bfcd9f-pmvv4\" (UID: \"fe039ec9-aaec-4e17-8eac-c7719245ba4d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.320956 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf6cb\" (UniqueName: \"kubernetes.io/projected/03b807f9-cd53-4189-b7df-09c5ea5fdf53-kube-api-access-lf6cb\") pod \"ovn-operator-controller-manager-788c46999f-qlc8l\" (UID: \"03b807f9-cd53-4189-b7df-09c5ea5fdf53\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.332828 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.376205 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr"] Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.377840 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.378836 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.412895 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v898l\" (UniqueName: \"kubernetes.io/projected/a104527b-98dc-4120-91b5-6e7e9466b9a3-kube-api-access-v898l\") pod \"octavia-operator-controller-manager-694c6dcf95-kdnzv\" (UID: \"a104527b-98dc-4120-91b5-6e7e9466b9a3\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.438024 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cm6c\" (UniqueName: \"kubernetes.io/projected/73790ffa-61b1-489c-94c9-3934af94185f-kube-api-access-7cm6c\") pod \"watcher-operator-controller-manager-5bf648c946-ngzmc\" (UID: \"73790ffa-61b1-489c-94c9-3934af94185f\") " pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.438829 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-rj47s" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.438978 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.439090 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.441192 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6kdn\" (UniqueName: \"kubernetes.io/projected/0c09af13-f67b-4306-9039-f02d5f9e2f53-kube-api-access-v6kdn\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.441279 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.441323 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.441347 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:55 crc kubenswrapper[4758]: E0130 08:45:55.441491 4758 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:45:55 crc kubenswrapper[4758]: E0130 08:45:55.441528 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert podName:8cb0c6cc-e254-4dae-b433-397504fba6dc nodeName:}" failed. No retries permitted until 2026-01-30 08:45:56.441514165 +0000 UTC m=+961.413825716 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" (UID: "8cb0c6cc-e254-4dae-b433-397504fba6dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.461168 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr"] Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.462029 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctj6h\" (UniqueName: \"kubernetes.io/projected/e4787454-8070-449e-a7d0-2ff179eaaff3-kube-api-access-ctj6h\") pod \"swift-operator-controller-manager-7d4f9d9c9b-t7dht\" (UID: \"e4787454-8070-449e-a7d0-2ff179eaaff3\") " pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.471987 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.647826 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.647900 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.647948 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6kdn\" (UniqueName: \"kubernetes.io/projected/0c09af13-f67b-4306-9039-f02d5f9e2f53-kube-api-access-v6kdn\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:55 crc kubenswrapper[4758]: E0130 08:45:55.683787 4758 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 08:45:55 crc kubenswrapper[4758]: E0130 08:45:55.683857 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. 
No retries permitted until 2026-01-30 08:45:56.183835542 +0000 UTC m=+961.156147093 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "webhook-server-cert" not found Jan 30 08:45:55 crc kubenswrapper[4758]: I0130 08:45:55.693015 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.058980 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.081131 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.082181 4758 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.082383 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert podName:ef31968c-db2e-4083-a08f-19a8daf0ac2d nodeName:}" failed. No retries permitted until 2026-01-30 08:45:58.082367701 +0000 UTC m=+963.054679252 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert") pod "infra-operator-controller-manager-79955696d6-5k6df" (UID: "ef31968c-db2e-4083-a08f-19a8daf0ac2d") : secret "infra-operator-webhook-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.139308 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.139391 4758 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.140101 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. No retries permitted until 2026-01-30 08:45:56.640079051 +0000 UTC m=+961.612390602 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "metrics-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.564880 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.579191 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.568504 4758 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.579525 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert podName:8cb0c6cc-e254-4dae-b433-397504fba6dc nodeName:}" failed. No retries permitted until 2026-01-30 08:45:58.57950588 +0000 UTC m=+963.551817431 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" (UID: "8cb0c6cc-e254-4dae-b433-397504fba6dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.579628 4758 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.579659 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. No retries permitted until 2026-01-30 08:45:57.579649854 +0000 UTC m=+962.551961405 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "webhook-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.595156 4758 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hvsqq container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.595217 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hvsqq" podUID="f452c53b-893b-4060-b573-595e98576792" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.635219 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-744b85dfd5-tlqzj" podUID="10f9b3c9-c691-403e-801f-420bc2701a95" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.54:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.680287 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.682231 4758 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: E0130 08:45:56.682321 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. No retries permitted until 2026-01-30 08:45:57.682302579 +0000 UTC m=+962.654614130 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "metrics-server-cert" not found Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.800063 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6kdn\" (UniqueName: \"kubernetes.io/projected/0c09af13-f67b-4306-9039-f02d5f9e2f53-kube-api-access-v6kdn\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.824172 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2"] Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.824848 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2"] Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.824861 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p"] Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.824869 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x"] Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.824933 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.837245 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-xbmrt" Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.844995 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq"] Jan 30 08:45:56 crc kubenswrapper[4758]: I0130 08:45:56.892285 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhjx\" (UniqueName: \"kubernetes.io/projected/fbb26261-18aa-4ba0-940e-788200175600-kube-api-access-lqhjx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-gt4n2\" (UID: \"fbb26261-18aa-4ba0-940e-788200175600\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.003084 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqhjx\" (UniqueName: \"kubernetes.io/projected/fbb26261-18aa-4ba0-940e-788200175600-kube-api-access-lqhjx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-gt4n2\" (UID: \"fbb26261-18aa-4ba0-940e-788200175600\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" Jan 30 08:45:57 crc kubenswrapper[4758]: W0130 08:45:57.030445 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62b3bb0d_894a_4cb1_b644_d42f3cba98d7.slice/crio-0fdd92d57f884f0100f2b5de033300897f19a14dd12184eeb86260bd20480331 WatchSource:0}: Error finding container 0fdd92d57f884f0100f2b5de033300897f19a14dd12184eeb86260bd20480331: Status 404 returned error can't find the 
container with id 0fdd92d57f884f0100f2b5de033300897f19a14dd12184eeb86260bd20480331 Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.037973 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqhjx\" (UniqueName: \"kubernetes.io/projected/fbb26261-18aa-4ba0-940e-788200175600-kube-api-access-lqhjx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-gt4n2\" (UID: \"fbb26261-18aa-4ba0-940e-788200175600\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.277355 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.359551 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-54985f5875-jct4c"] Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.617632 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:57 crc kubenswrapper[4758]: E0130 08:45:57.617845 4758 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 08:45:57 crc kubenswrapper[4758]: E0130 08:45:57.617892 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. No retries permitted until 2026-01-30 08:45:59.617878364 +0000 UTC m=+964.590189915 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "webhook-server-cert" not found Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.623067 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn"] Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.718611 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:57 crc kubenswrapper[4758]: E0130 08:45:57.718909 4758 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 08:45:57 crc kubenswrapper[4758]: E0130 08:45:57.719337 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. No retries permitted until 2026-01-30 08:45:59.719285211 +0000 UTC m=+964.691596762 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "metrics-server-cert" not found Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.825846 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" event={"ID":"62b3bb0d-894a-4cb1-b644-d42f3cba98d7","Type":"ContainerStarted","Data":"0fdd92d57f884f0100f2b5de033300897f19a14dd12184eeb86260bd20480331"} Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.837436 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" event={"ID":"ac7c91ce-d4d9-4754-9828-a43140218228","Type":"ContainerStarted","Data":"0961b1b947fb3f57791dda7650e0e677d8544b69d666f6956c8c438d2c9e0fe5"} Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.841242 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" event={"ID":"1c4d1258-0416-49d0-a3a5-6ece70dc0c46","Type":"ContainerStarted","Data":"6e29cd568f358c1fd1119de90180229fef6f1b348f9d696794fcea6a06816677"} Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.844802 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" event={"ID":"fe68673c-8979-46ee-a4aa-f95bcd7b4e8a","Type":"ContainerStarted","Data":"ba598062737afa2a71c6e3ac5e1c0a6f413a9d82323895c21031359b472b7202"} Jan 30 08:45:57 crc kubenswrapper[4758]: I0130 08:45:57.855433 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" event={"ID":"9579fc9d-6eae-4249-ac43-35144ed58bed","Type":"ContainerStarted","Data":"6e886e0a878e5cf5c9123899d27c4ba3453be9aae15a093fadd90af716497064"} Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.126389 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:45:58 crc kubenswrapper[4758]: E0130 08:45:58.126542 4758 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 08:45:58 crc kubenswrapper[4758]: E0130 08:45:58.126591 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert podName:ef31968c-db2e-4083-a08f-19a8daf0ac2d nodeName:}" failed. No retries permitted until 2026-01-30 08:46:02.126576716 +0000 UTC m=+967.098888267 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert") pod "infra-operator-controller-manager-79955696d6-5k6df" (UID: "ef31968c-db2e-4083-a08f-19a8daf0ac2d") : secret "infra-operator-webhook-server-cert" not found Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.494504 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.539319 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj"] Jan 30 08:45:58 crc kubenswrapper[4758]: W0130 08:45:58.584191 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb271df00_e9f2_4c58_94e7_22ea4b7d7eaf.slice/crio-24b8c2d5844a051ecc937a4521ba5fdac9821637df6089a6a3e48e1e6f2463fb WatchSource:0}: Error finding container 24b8c2d5844a051ecc937a4521ba5fdac9821637df6089a6a3e48e1e6f2463fb: Status 404 returned error can't find the container with id 24b8c2d5844a051ecc937a4521ba5fdac9821637df6089a6a3e48e1e6f2463fb Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.587319 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:45:58 crc kubenswrapper[4758]: E0130 08:45:58.587582 4758 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:45:58 crc kubenswrapper[4758]: E0130 08:45:58.587658 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert podName:8cb0c6cc-e254-4dae-b433-397504fba6dc nodeName:}" failed. No retries permitted until 2026-01-30 08:46:02.587635466 +0000 UTC m=+967.559947017 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" (UID: "8cb0c6cc-e254-4dae-b433-397504fba6dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.588581 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.657859 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.694531 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.707979 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.710943 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.729219 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.736362 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l"] Jan 30 08:45:58 crc kubenswrapper[4758]: W0130 08:45:58.754927 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6d614d3_1ced_4b27_bd91_8edd410e5fc5.slice/crio-0e59006c6224e43589d1a9cbccfb6da71ba01b50be97ccbec9ee7dddf5b1e346 WatchSource:0}: Error finding container 0e59006c6224e43589d1a9cbccfb6da71ba01b50be97ccbec9ee7dddf5b1e346: Status 404 returned error can't find the container with id 0e59006c6224e43589d1a9cbccfb6da71ba01b50be97ccbec9ee7dddf5b1e346 Jan 30 08:45:58 crc kubenswrapper[4758]: W0130 08:45:58.766166 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03b807f9_cd53_4189_b7df_09c5ea5fdf53.slice/crio-98fbee49745f55defc4e9962f77c83ab6ada33ce8782c2a12098b627e2e49772 WatchSource:0}: Error finding container 98fbee49745f55defc4e9962f77c83ab6ada33ce8782c2a12098b627e2e49772: Status 404 returned error can't find the container with id 98fbee49745f55defc4e9962f77c83ab6ada33ce8782c2a12098b627e2e49772 Jan 30 08:45:58 crc kubenswrapper[4758]: W0130 08:45:58.783896 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b006f91_5b27_4342_935b_c7a7f174c03b.slice/crio-20344b8cd282e1e10fae1f7185ab6367587eccb84d8f66d736da8b9287695ec9 WatchSource:0}: Error finding container 20344b8cd282e1e10fae1f7185ab6367587eccb84d8f66d736da8b9287695ec9: Status 404 returned error can't find the container with id 20344b8cd282e1e10fae1f7185ab6367587eccb84d8f66d736da8b9287695ec9 Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.791358 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 
08:45:58.800589 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.816122 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf"] Jan 30 08:45:58 crc kubenswrapper[4758]: W0130 08:45:58.841738 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2a4d0cd_ddb6_43d6_8f3e_457f519fb8c2.slice/crio-a21c68eccba47dc8a8e46d44068ca5ea2c0c1b4dcf970bc1523893bb59c68c57 WatchSource:0}: Error finding container a21c68eccba47dc8a8e46d44068ca5ea2c0c1b4dcf970bc1523893bb59c68c57: Status 404 returned error can't find the container with id a21c68eccba47dc8a8e46d44068ca5ea2c0c1b4dcf970bc1523893bb59c68c57 Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.855292 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k"] Jan 30 08:45:58 crc kubenswrapper[4758]: E0130 08:45:58.883208 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-grlxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-operator-controller-manager-5b964cf4cd-vzcpf_openstack-operators(178c1ff9-1a2a-4c4a-8258-89c267a5d0aa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 08:45:58 crc kubenswrapper[4758]: E0130 08:45:58.885256 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" podUID="178c1ff9-1a2a-4c4a-8258-89c267a5d0aa" Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.936714 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc"] Jan 30 08:45:58 crc kubenswrapper[4758]: W0130 08:45:58.950479 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0565da5c_02e0_409f_b801_e06c3e79ef47.slice/crio-0b2491d7ce6f5ae8195ae8c138666de0f0b41f8306b58f33a03eb5957ed08f5d WatchSource:0}: Error finding container 0b2491d7ce6f5ae8195ae8c138666de0f0b41f8306b58f33a03eb5957ed08f5d: Status 404 returned error can't find the container with id 0b2491d7ce6f5ae8195ae8c138666de0f0b41f8306b58f33a03eb5957ed08f5d Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.982767 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" event={"ID":"dc189df6-25bc-4d6e-aa30-05ce0db12721","Type":"ContainerStarted","Data":"c297ec26bf7669592da7b856a7716b8c62bee181f96f369221dff2619d7aa09a"} Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.988014 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv"] Jan 30 08:45:58 crc kubenswrapper[4758]: I0130 08:45:58.993916 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" event={"ID":"03b807f9-cd53-4189-b7df-09c5ea5fdf53","Type":"ContainerStarted","Data":"98fbee49745f55defc4e9962f77c83ab6ada33ce8782c2a12098b627e2e49772"} Jan 30 08:45:59 crc kubenswrapper[4758]: E0130 08:45:59.012738 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:8049d4d17f301838dfbc3740629d57f9b29c08e779affbf96c4197dc4d1fe19b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7cm6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5bf648c946-ngzmc_openstack-operators(73790ffa-61b1-489c-94c9-3934af94185f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 08:45:59 crc kubenswrapper[4758]: E0130 08:45:59.014102 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" podUID="73790ffa-61b1-489c-94c9-3934af94185f" Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.025173 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" event={"ID":"fbb26261-18aa-4ba0-940e-788200175600","Type":"ContainerStarted","Data":"f15d170dca7ce76872a233522052bacd75d776abb8543921bb4a8785471ae888"} Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.041265 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" event={"ID":"5c2a7d2b-62a1-468b-a3b3-fe77698a41a2","Type":"ContainerStarted","Data":"00d427a01faba891dda70adc27c7d61bd017b46b366975d104740e8f99ec8197"} Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.068343 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" event={"ID":"e4787454-8070-449e-a7d0-2ff179eaaff3","Type":"ContainerStarted","Data":"ce945b9e7cf19b98a6f520a1bcd09926918ae3a3d4194168165a841ada57e5c9"} Jan 30 08:45:59 crc kubenswrapper[4758]: E0130 08:45:59.068689 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 
0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v898l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-694c6dcf95-kdnzv_openstack-operators(a104527b-98dc-4120-91b5-6e7e9466b9a3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 08:45:59 crc kubenswrapper[4758]: E0130 08:45:59.069971 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" podUID="a104527b-98dc-4120-91b5-6e7e9466b9a3" Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.076205 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" event={"ID":"0b006f91-5b27-4342-935b-c7a7f174c03b","Type":"ContainerStarted","Data":"20344b8cd282e1e10fae1f7185ab6367587eccb84d8f66d736da8b9287695ec9"} Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.078508 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" event={"ID":"b271df00-e9f2-4c58-94e7-22ea4b7d7eaf","Type":"ContainerStarted","Data":"24b8c2d5844a051ecc937a4521ba5fdac9821637df6089a6a3e48e1e6f2463fb"} Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.079655 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" event={"ID":"fe039ec9-aaec-4e17-8eac-c7719245ba4d","Type":"ContainerStarted","Data":"461e979a4a31f53298e8d50966345d9d6102907eab8b154933f452defcb0ed55"} Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.082938 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" event={"ID":"b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7","Type":"ContainerStarted","Data":"b10de5108ad68e20062602ec056a480d36b3509d6405d60165cf1744982aa306"} Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.087249 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" event={"ID":"b6d614d3-1ced-4b27-bd91-8edd410e5fc5","Type":"ContainerStarted","Data":"0e59006c6224e43589d1a9cbccfb6da71ba01b50be97ccbec9ee7dddf5b1e346"} Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.089577 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" event={"ID":"b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2","Type":"ContainerStarted","Data":"a21c68eccba47dc8a8e46d44068ca5ea2c0c1b4dcf970bc1523893bb59c68c57"} Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.713387 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:59 crc kubenswrapper[4758]: E0130 08:45:59.713592 4758 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 08:45:59 crc kubenswrapper[4758]: E0130 08:45:59.713826 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. No retries permitted until 2026-01-30 08:46:03.713806198 +0000 UTC m=+968.686117749 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "webhook-server-cert" not found Jan 30 08:45:59 crc kubenswrapper[4758]: I0130 08:45:59.815920 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:45:59 crc kubenswrapper[4758]: E0130 08:45:59.817588 4758 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 08:45:59 crc kubenswrapper[4758]: E0130 08:45:59.817650 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. No retries permitted until 2026-01-30 08:46:03.81762452 +0000 UTC m=+968.789936071 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "metrics-server-cert" not found Jan 30 08:46:00 crc kubenswrapper[4758]: I0130 08:46:00.140147 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" event={"ID":"a104527b-98dc-4120-91b5-6e7e9466b9a3","Type":"ContainerStarted","Data":"a9aeeb357d350139aa29e70aed36ea3fd890af5d2177621196ea22fa505d08c4"} Jan 30 08:46:00 crc kubenswrapper[4758]: E0130 08:46:00.147298 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" podUID="a104527b-98dc-4120-91b5-6e7e9466b9a3" Jan 30 08:46:00 crc kubenswrapper[4758]: I0130 08:46:00.156361 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" event={"ID":"0565da5c-02e0-409f-b801-e06c3e79ef47","Type":"ContainerStarted","Data":"0b2491d7ce6f5ae8195ae8c138666de0f0b41f8306b58f33a03eb5957ed08f5d"} Jan 30 08:46:00 crc kubenswrapper[4758]: I0130 08:46:00.172375 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" event={"ID":"178c1ff9-1a2a-4c4a-8258-89c267a5d0aa","Type":"ContainerStarted","Data":"ef30ab686332c6daf386b4a6edb9e0e5cef4a54bfc34ead8a2bd1cd5291aa835"} Jan 30 08:46:00 crc kubenswrapper[4758]: E0130 08:46:00.177604 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" podUID="178c1ff9-1a2a-4c4a-8258-89c267a5d0aa" Jan 30 08:46:00 crc kubenswrapper[4758]: I0130 08:46:00.180386 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" event={"ID":"73790ffa-61b1-489c-94c9-3934af94185f","Type":"ContainerStarted","Data":"b5abad725da64701824534cc8536be55dd296f9cb758cd1bab2bec06cb67aadf"} Jan 30 08:46:00 crc kubenswrapper[4758]: E0130 08:46:00.182894 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:8049d4d17f301838dfbc3740629d57f9b29c08e779affbf96c4197dc4d1fe19b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" podUID="73790ffa-61b1-489c-94c9-3934af94185f" Jan 30 08:46:01 crc kubenswrapper[4758]: E0130 08:46:01.197014 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" podUID="a104527b-98dc-4120-91b5-6e7e9466b9a3" Jan 30 
08:46:01 crc kubenswrapper[4758]: E0130 08:46:01.227957 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:8049d4d17f301838dfbc3740629d57f9b29c08e779affbf96c4197dc4d1fe19b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" podUID="73790ffa-61b1-489c-94c9-3934af94185f" Jan 30 08:46:01 crc kubenswrapper[4758]: E0130 08:46:01.228862 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" podUID="178c1ff9-1a2a-4c4a-8258-89c267a5d0aa" Jan 30 08:46:02 crc kubenswrapper[4758]: I0130 08:46:02.185552 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:46:02 crc kubenswrapper[4758]: E0130 08:46:02.185796 4758 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 08:46:02 crc kubenswrapper[4758]: E0130 08:46:02.185852 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert podName:ef31968c-db2e-4083-a08f-19a8daf0ac2d nodeName:}" failed. No retries permitted until 2026-01-30 08:46:10.185832954 +0000 UTC m=+975.158144505 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert") pod "infra-operator-controller-manager-79955696d6-5k6df" (UID: "ef31968c-db2e-4083-a08f-19a8daf0ac2d") : secret "infra-operator-webhook-server-cert" not found Jan 30 08:46:02 crc kubenswrapper[4758]: I0130 08:46:02.591817 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:46:02 crc kubenswrapper[4758]: E0130 08:46:02.591960 4758 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:46:02 crc kubenswrapper[4758]: E0130 08:46:02.592006 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert podName:8cb0c6cc-e254-4dae-b433-397504fba6dc nodeName:}" failed. No retries permitted until 2026-01-30 08:46:10.591992795 +0000 UTC m=+975.564304346 (durationBeforeRetry 8s). 
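The ImagePullBackOff entries above are the aftermath of the earlier "ErrImagePull: pull QPS exceeded" failures, and that error is produced by the kubelet itself, not by quay.io: image pulls pass through a non-blocking token-bucket limiter sized by registryPullQPS and registryBurst in KubeletConfiguration (defaults 5 and 10), so a burst of operator pods all starting at once drains the bucket and the excess pulls fail immediately. A minimal sketch using the same flowcontrol package the kubelet uses, under the default limits; this is an illustration of the mechanism, not the kubelet source:

    package main

    import (
        "fmt"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        // Token bucket sized like the kubelet defaults: registryPullQPS=5,
        // registryBurst=10. TryAccept is non-blocking, so once the burst
        // allowance is spent the pull fails on the spot instead of queueing,
        // which is what surfaces as "pull QPS exceeded" in the log.
        limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
        for i := 1; i <= 12; i++ {
            if limiter.TryAccept() {
                fmt.Printf("pull %d admitted\n", i)
            } else {
                fmt.Printf("pull %d rejected: pull QPS exceeded\n", i)
            }
        }
    }

Setting registryPullQPS to 0 in the kubelet configuration disables the throttle entirely, a common tweak on CI hosts that schedule dozens of operator pods simultaneously.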
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" (UID: "8cb0c6cc-e254-4dae-b433-397504fba6dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 08:46:03 crc kubenswrapper[4758]: I0130 08:46:03.715538 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:46:03 crc kubenswrapper[4758]: E0130 08:46:03.715709 4758 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 08:46:03 crc kubenswrapper[4758]: E0130 08:46:03.715920 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. No retries permitted until 2026-01-30 08:46:11.715903546 +0000 UTC m=+976.688215097 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "webhook-server-cert" not found Jan 30 08:46:03 crc kubenswrapper[4758]: I0130 08:46:03.918593 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:46:03 crc kubenswrapper[4758]: E0130 08:46:03.918793 4758 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 08:46:03 crc kubenswrapper[4758]: E0130 08:46:03.918890 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs podName:0c09af13-f67b-4306-9039-f02d5f9e2f53 nodeName:}" failed. No retries permitted until 2026-01-30 08:46:11.918868282 +0000 UTC m=+976.891179863 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs") pod "openstack-operator-controller-manager-59cb5bfcb7-xtndr" (UID: "0c09af13-f67b-4306-9039-f02d5f9e2f53") : secret "metrics-server-cert" not found Jan 30 08:46:10 crc kubenswrapper[4758]: I0130 08:46:10.231490 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:46:10 crc kubenswrapper[4758]: I0130 08:46:10.240174 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef31968c-db2e-4083-a08f-19a8daf0ac2d-cert\") pod \"infra-operator-controller-manager-79955696d6-5k6df\" (UID: \"ef31968c-db2e-4083-a08f-19a8daf0ac2d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:46:10 crc kubenswrapper[4758]: I0130 08:46:10.340912 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-mxwhp" Jan 30 08:46:10 crc kubenswrapper[4758]: I0130 08:46:10.350171 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:46:10 crc kubenswrapper[4758]: I0130 08:46:10.638164 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:46:10 crc kubenswrapper[4758]: I0130 08:46:10.660020 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cb0c6cc-e254-4dae-b433-397504fba6dc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv\" (UID: \"8cb0c6cc-e254-4dae-b433-397504fba6dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:46:10 crc kubenswrapper[4758]: I0130 08:46:10.961683 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-mj5td" Jan 30 08:46:10 crc kubenswrapper[4758]: I0130 08:46:10.970111 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:46:11 crc kubenswrapper[4758]: I0130 08:46:11.755981 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:46:11 crc kubenswrapper[4758]: I0130 08:46:11.759816 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-webhook-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:46:11 crc kubenswrapper[4758]: I0130 08:46:11.960388 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:46:11 crc kubenswrapper[4758]: I0130 08:46:11.963958 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c09af13-f67b-4306-9039-f02d5f9e2f53-metrics-certs\") pod \"openstack-operator-controller-manager-59cb5bfcb7-xtndr\" (UID: \"0c09af13-f67b-4306-9039-f02d5f9e2f53\") " pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:46:11 crc kubenswrapper[4758]: I0130 08:46:11.995164 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-rj47s" Jan 30 08:46:12 crc kubenswrapper[4758]: I0130 08:46:12.003923 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:46:14 crc kubenswrapper[4758]: E0130 08:46:14.404296 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 30 08:46:14 crc kubenswrapper[4758]: E0130 08:46:14.405078 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hlcl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-vjdn9_openstack-operators(b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:14 crc kubenswrapper[4758]: E0130 08:46:14.406307 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" podUID="b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2" Jan 30 08:46:15 crc kubenswrapper[4758]: E0130 08:46:15.297849 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" podUID="b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2" Jan 30 08:46:15 crc kubenswrapper[4758]: E0130 08:46:15.387844 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/barbican-operator@sha256:9dadfafaf8e84cd2ba5f076b2a64261f781f5c357678237202acb1bee2688d25" Jan 30 08:46:15 crc kubenswrapper[4758]: E0130 08:46:15.388074 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/barbican-operator@sha256:9dadfafaf8e84cd2ba5f076b2a64261f781f5c357678237202acb1bee2688d25,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kggjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-566c8844c5-6nj4p_openstack-operators(1c4d1258-0416-49d0-a3a5-6ece70dc0c46): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:15 crc kubenswrapper[4758]: E0130 08:46:15.389226 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" podUID="1c4d1258-0416-49d0-a3a5-6ece70dc0c46" Jan 30 08:46:15 crc kubenswrapper[4758]: E0130 08:46:15.959568 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/heat-operator@sha256:9f790ab2e5cc7137dd72c7b6232acb6c6646e421c597fa14c2389e8d76ff6f27" Jan 30 08:46:15 crc kubenswrapper[4758]: E0130 08:46:15.959738 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/heat-operator@sha256:9f790ab2e5cc7137dd72c7b6232acb6c6646e421c597fa14c2389e8d76ff6f27,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s97nc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-54985f5875-jct4c_openstack-operators(9579fc9d-6eae-4249-ac43-35144ed58bed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:15 crc kubenswrapper[4758]: E0130 08:46:15.960840 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" podUID="9579fc9d-6eae-4249-ac43-35144ed58bed" Jan 30 08:46:16 crc kubenswrapper[4758]: E0130 08:46:16.305243 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/heat-operator@sha256:9f790ab2e5cc7137dd72c7b6232acb6c6646e421c597fa14c2389e8d76ff6f27\\\"\"" pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" podUID="9579fc9d-6eae-4249-ac43-35144ed58bed" Jan 30 08:46:16 crc kubenswrapper[4758]: E0130 08:46:16.305282 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/barbican-operator@sha256:9dadfafaf8e84cd2ba5f076b2a64261f781f5c357678237202acb1bee2688d25\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" podUID="1c4d1258-0416-49d0-a3a5-6ece70dc0c46" Jan 30 08:46:16 crc kubenswrapper[4758]: I0130 08:46:16.897437 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xjzxr"] Jan 30 08:46:16 crc kubenswrapper[4758]: I0130 08:46:16.899716 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:16 crc kubenswrapper[4758]: I0130 08:46:16.906152 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xjzxr"] Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.038936 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-utilities\") pod \"certified-operators-xjzxr\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.038999 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-catalog-content\") pod \"certified-operators-xjzxr\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.039133 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk9f4\" (UniqueName: \"kubernetes.io/projected/bcb47b30-3297-48ef-b045-92a3bfa3ade9-kube-api-access-bk9f4\") pod \"certified-operators-xjzxr\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.140744 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk9f4\" (UniqueName: \"kubernetes.io/projected/bcb47b30-3297-48ef-b045-92a3bfa3ade9-kube-api-access-bk9f4\") pod \"certified-operators-xjzxr\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.140829 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-utilities\") pod \"certified-operators-xjzxr\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.140870 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-catalog-content\") pod \"certified-operators-xjzxr\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.141583 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-catalog-content\") pod \"certified-operators-xjzxr\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.141637 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-utilities\") pod \"certified-operators-xjzxr\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.184577 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk9f4\" (UniqueName: \"kubernetes.io/projected/bcb47b30-3297-48ef-b045-92a3bfa3ade9-kube-api-access-bk9f4\") pod \"certified-operators-xjzxr\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") " pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:17 crc kubenswrapper[4758]: I0130 08:46:17.223887 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:46:20 crc kubenswrapper[4758]: E0130 08:46:20.879672 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/neutron-operator@sha256:24a7033dccd09885beebba692a7951d5388284a36f285a97607971c10113354e" Jan 30 08:46:20 crc kubenswrapper[4758]: E0130 08:46:20.880204 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:24a7033dccd09885beebba692a7951d5388284a36f285a97607971c10113354e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qrlm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6cfc4f6754-7s6n2_openstack-operators(b6d614d3-1ced-4b27-bd91-8edd410e5fc5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:20 crc kubenswrapper[4758]: E0130 08:46:20.881508 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" podUID="b6d614d3-1ced-4b27-bd91-8edd410e5fc5" Jan 30 08:46:21 crc kubenswrapper[4758]: E0130 08:46:21.361842 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:24a7033dccd09885beebba692a7951d5388284a36f285a97607971c10113354e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" podUID="b6d614d3-1ced-4b27-bd91-8edd410e5fc5" Jan 30 08:46:21 crc kubenswrapper[4758]: E0130 08:46:21.743607 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/telemetry-operator@sha256:7316ef2da8e4d8df06b150058249eaed2aa4719491716a4422a8ee5d6a0c352f" Jan 30 08:46:21 crc kubenswrapper[4758]: E0130 08:46:21.743834 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:7316ef2da8e4d8df06b150058249eaed2aa4719491716a4422a8ee5d6a0c352f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8qkvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cd99594-xhszj_openstack-operators(b271df00-e9f2-4c58-94e7-22ea4b7d7eaf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:21 crc kubenswrapper[4758]: E0130 08:46:21.745652 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" podUID="b271df00-e9f2-4c58-94e7-22ea4b7d7eaf" Jan 30 08:46:22 crc kubenswrapper[4758]: E0130 08:46:22.361787 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:7316ef2da8e4d8df06b150058249eaed2aa4719491716a4422a8ee5d6a0c352f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" podUID="b271df00-e9f2-4c58-94e7-22ea4b7d7eaf" Jan 30 08:46:22 crc kubenswrapper[4758]: E0130 08:46:22.435453 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 30 08:46:22 crc kubenswrapper[4758]: E0130 08:46:22.435641 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6k9sw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-pmvv4_openstack-operators(fe039ec9-aaec-4e17-8eac-c7719245ba4d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:22 crc kubenswrapper[4758]: E0130 08:46:22.436811 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" podUID="fe039ec9-aaec-4e17-8eac-c7719245ba4d" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.591635 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q5zts"] Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.592979 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.612224 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5zts"] Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.736087 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-catalog-content\") pod \"redhat-marketplace-q5zts\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.736147 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggxq2\" (UniqueName: \"kubernetes.io/projected/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-kube-api-access-ggxq2\") pod \"redhat-marketplace-q5zts\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.736180 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-utilities\") pod \"redhat-marketplace-q5zts\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.837102 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-catalog-content\") pod \"redhat-marketplace-q5zts\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.837185 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggxq2\" (UniqueName: \"kubernetes.io/projected/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-kube-api-access-ggxq2\") pod \"redhat-marketplace-q5zts\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.837214 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-utilities\") pod \"redhat-marketplace-q5zts\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.837698 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-catalog-content\") pod \"redhat-marketplace-q5zts\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.837849 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-utilities\") pod \"redhat-marketplace-q5zts\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.872534 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ggxq2\" (UniqueName: \"kubernetes.io/projected/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-kube-api-access-ggxq2\") pod \"redhat-marketplace-q5zts\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:22 crc kubenswrapper[4758]: I0130 08:46:22.913359 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:23 crc kubenswrapper[4758]: E0130 08:46:23.369425 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" podUID="fe039ec9-aaec-4e17-8eac-c7719245ba4d" Jan 30 08:46:24 crc kubenswrapper[4758]: E0130 08:46:24.920613 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 30 08:46:24 crc kubenswrapper[4758]: E0130 08:46:24.920830 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lqhjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-gt4n2_openstack-operators(fbb26261-18aa-4ba0-940e-788200175600): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:24 crc kubenswrapper[4758]: E0130 
08:46:24.921991 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" podUID="fbb26261-18aa-4ba0-940e-788200175600" Jan 30 08:46:25 crc kubenswrapper[4758]: E0130 08:46:25.391062 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" podUID="fbb26261-18aa-4ba0-940e-788200175600" Jan 30 08:46:26 crc kubenswrapper[4758]: E0130 08:46:26.199554 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:dc0be288bd4f98a1e80d21cb9e9a12381b33f9e4d6ecda0f46ca076587660144" Jan 30 08:46:26 crc kubenswrapper[4758]: E0130 08:46:26.200087 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:dc0be288bd4f98a1e80d21cb9e9a12381b33f9e4d6ecda0f46ca076587660144,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4tfdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
keystone-operator-controller-manager-6c9d56f9bd-h89d9_openstack-operators(b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:26 crc kubenswrapper[4758]: E0130 08:46:26.201697 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" podUID="b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7" Jan 30 08:46:26 crc kubenswrapper[4758]: E0130 08:46:26.392824 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:dc0be288bd4f98a1e80d21cb9e9a12381b33f9e4d6ecda0f46ca076587660144\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" podUID="b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7" Jan 30 08:46:26 crc kubenswrapper[4758]: E0130 08:46:26.797768 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c" Jan 30 08:46:26 crc kubenswrapper[4758]: E0130 08:46:26.797944 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v898l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-694c6dcf95-kdnzv_openstack-operators(a104527b-98dc-4120-91b5-6e7e9466b9a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:26 crc kubenswrapper[4758]: E0130 08:46:26.799697 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" podUID="a104527b-98dc-4120-91b5-6e7e9466b9a3" Jan 30 08:46:27 crc kubenswrapper[4758]: E0130 08:46:27.464585 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:9d4490922c772ceca4b86d2a78e5cc6cd7198099dc637cc6c10428fc9c4e15fb" Jan 30 08:46:27 crc kubenswrapper[4758]: E0130 08:46:27.464773 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:9d4490922c772ceca4b86d2a78e5cc6cd7198099dc637cc6c10428fc9c4e15fb,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wsjts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-67f5956bc9-hp2mv_openstack-operators(0b006f91-5b27-4342-935b-c7a7f174c03b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:46:27 crc kubenswrapper[4758]: E0130 08:46:27.466079 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" podUID="0b006f91-5b27-4342-935b-c7a7f174c03b" Jan 30 08:46:28 crc kubenswrapper[4758]: E0130 08:46:28.407478 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:9d4490922c772ceca4b86d2a78e5cc6cd7198099dc637cc6c10428fc9c4e15fb\\\"\"" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" podUID="0b006f91-5b27-4342-935b-c7a7f174c03b" Jan 30 08:46:29 crc kubenswrapper[4758]: I0130 08:46:29.637489 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-5k6df"] Jan 30 08:46:29 crc kubenswrapper[4758]: I0130 08:46:29.667364 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr"] Jan 30 08:46:29 crc kubenswrapper[4758]: I0130 08:46:29.817284 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv"] Jan 30 08:46:29 crc kubenswrapper[4758]: I0130 08:46:29.858787 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5zts"] Jan 30 08:46:29 crc kubenswrapper[4758]: W0130 08:46:29.864886 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cb0c6cc_e254_4dae_b433_397504fba6dc.slice/crio-7bbcb0d9f086483ec9f437011820b6a78845de1fcd57eac9994dfa5a151dd525 WatchSource:0}: Error finding container 7bbcb0d9f086483ec9f437011820b6a78845de1fcd57eac9994dfa5a151dd525: Status 404 returned error can't find the container with id 7bbcb0d9f086483ec9f437011820b6a78845de1fcd57eac9994dfa5a151dd525 Jan 30 08:46:29 crc kubenswrapper[4758]: W0130 08:46:29.878943 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b8851e8_8f4f_4f19_9aa6_e4a05f90d2b1.slice/crio-51348cfa0c0e6ca90b1d98c0ff7786082c45a4de6c44a4f8954078597967a5f3 WatchSource:0}: Error finding container 
51348cfa0c0e6ca90b1d98c0ff7786082c45a4de6c44a4f8954078597967a5f3: Status 404 returned error can't find the container with id 51348cfa0c0e6ca90b1d98c0ff7786082c45a4de6c44a4f8954078597967a5f3 Jan 30 08:46:29 crc kubenswrapper[4758]: I0130 08:46:29.883871 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xjzxr"] Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.425715 4758 generic.go:334] "Generic (PLEG): container finished" podID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerID="685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1" exitCode=0 Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.426979 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5zts" event={"ID":"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1","Type":"ContainerDied","Data":"685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.427105 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5zts" event={"ID":"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1","Type":"ContainerStarted","Data":"51348cfa0c0e6ca90b1d98c0ff7786082c45a4de6c44a4f8954078597967a5f3"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.428821 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" event={"ID":"ef31968c-db2e-4083-a08f-19a8daf0ac2d","Type":"ContainerStarted","Data":"e0c8a780eb16297074592133943f9c0d2461557b09ff532c6a9a07d7c1f80791"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.429823 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" event={"ID":"8cb0c6cc-e254-4dae-b433-397504fba6dc","Type":"ContainerStarted","Data":"7bbcb0d9f086483ec9f437011820b6a78845de1fcd57eac9994dfa5a151dd525"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.432583 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" event={"ID":"dc189df6-25bc-4d6e-aa30-05ce0db12721","Type":"ContainerStarted","Data":"a834dcccf839a2e389dff65725635cf90843ee6fde5d604219fae8033fb6fcd8"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.432739 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.462001 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" event={"ID":"9579fc9d-6eae-4249-ac43-35144ed58bed","Type":"ContainerStarted","Data":"3ac6bc03858c68cba296e5f5a016ca273cabf61e6dc36123c051d9e9705dda00"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.462788 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.481582 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" event={"ID":"fe68673c-8979-46ee-a4aa-f95bcd7b4e8a","Type":"ContainerStarted","Data":"6763991b335690dd7d87a5a123235b36641a52f2dd0986acf59ad0e1dcf7ee26"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.482184 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.501025 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" event={"ID":"62b3bb0d-894a-4cb1-b644-d42f3cba98d7","Type":"ContainerStarted","Data":"811708b02ebe609442569f06856770f1a7a24efd51c9c87c3ec32f1c033e0df8"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.501698 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.503235 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" event={"ID":"e4787454-8070-449e-a7d0-2ff179eaaff3","Type":"ContainerStarted","Data":"07c48a650cb7e17b28e28a01ce1a2403cdccc5d5afa7c3928dfe70599bfdf436"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.503920 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.512473 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzxr" event={"ID":"bcb47b30-3297-48ef-b045-92a3bfa3ade9","Type":"ContainerStarted","Data":"06b17b09dd9e50fb149bd1b02970b5fb7919445a9f9ff520c7633494ad9262cf"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.518307 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" event={"ID":"ac7c91ce-d4d9-4754-9828-a43140218228","Type":"ContainerStarted","Data":"a4dacef9d0c5d1ca65f297dd9aca2f8ded0efe7810aac8eaae55f99fd872713c"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.518883 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.520468 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" event={"ID":"73790ffa-61b1-489c-94c9-3934af94185f","Type":"ContainerStarted","Data":"060b6190b391d3558107a66066e926283193f48bbcdee5b03cf4f60d68be870d"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.520906 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.545360 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" event={"ID":"03b807f9-cd53-4189-b7df-09c5ea5fdf53","Type":"ContainerStarted","Data":"77711f65b64b85c6c3f9da2f552ec13b6a2bd98ebd9722cc6eda337d023d8e8a"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.545457 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.569322 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" event={"ID":"b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2","Type":"ContainerStarted","Data":"d0677b060d86da099904bc16c710292f1bae285b58ebeb597612e0e91771a31b"} Jan 30 
08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.569598 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.575785 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" event={"ID":"178c1ff9-1a2a-4c4a-8258-89c267a5d0aa","Type":"ContainerStarted","Data":"4858a003abaa445853852b12a4beff866c838e8cdbab70ab9203f184e3dc584b"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.577450 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.605419 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" event={"ID":"5c2a7d2b-62a1-468b-a3b3-fe77698a41a2","Type":"ContainerStarted","Data":"5baa82b6bf9770e4058731014c86691d09ef445e4a23f53f3d51bf299176aed9"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.605821 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.612718 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" podStartSLOduration=8.46715709 podStartE2EDuration="37.612696714s" podCreationTimestamp="2026-01-30 08:45:53 +0000 UTC" firstStartedPulling="2026-01-30 08:45:56.940899059 +0000 UTC m=+961.913210600" lastFinishedPulling="2026-01-30 08:46:26.086438673 +0000 UTC m=+991.058750224" observedRunningTime="2026-01-30 08:46:30.609420771 +0000 UTC m=+995.581732332" watchObservedRunningTime="2026-01-30 08:46:30.612696714 +0000 UTC m=+995.585008265" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.625197 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" event={"ID":"0565da5c-02e0-409f-b801-e06c3e79ef47","Type":"ContainerStarted","Data":"2501cb20d7633502732aff0c05742718c8383f08b48477861506d3c1e3788b57"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.625838 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.642697 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" event={"ID":"0c09af13-f67b-4306-9039-f02d5f9e2f53","Type":"ContainerStarted","Data":"53cce0fe3356f21165a99bfb70cce6de3aa0a96d96e964a68c4f68d5dd64e058"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.642738 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" event={"ID":"0c09af13-f67b-4306-9039-f02d5f9e2f53","Type":"ContainerStarted","Data":"b00cce98f9f7816272307e7964dd7bc6f74020439f5711cd2931ab8befa13e1a"} Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.643460 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.674347 4758 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" podStartSLOduration=9.724193386 podStartE2EDuration="37.674325336s" podCreationTimestamp="2026-01-30 08:45:53 +0000 UTC" firstStartedPulling="2026-01-30 08:45:57.044333829 +0000 UTC m=+962.016645380" lastFinishedPulling="2026-01-30 08:46:24.994465779 +0000 UTC m=+989.966777330" observedRunningTime="2026-01-30 08:46:30.669602598 +0000 UTC m=+995.641914149" watchObservedRunningTime="2026-01-30 08:46:30.674325336 +0000 UTC m=+995.646636887" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.700594 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" podStartSLOduration=5.88351199 podStartE2EDuration="37.700574469s" podCreationTimestamp="2026-01-30 08:45:53 +0000 UTC" firstStartedPulling="2026-01-30 08:45:57.548210359 +0000 UTC m=+962.520521910" lastFinishedPulling="2026-01-30 08:46:29.365272838 +0000 UTC m=+994.337584389" observedRunningTime="2026-01-30 08:46:30.696281784 +0000 UTC m=+995.668593355" watchObservedRunningTime="2026-01-30 08:46:30.700574469 +0000 UTC m=+995.672886020" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.731813 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" podStartSLOduration=10.295652887 podStartE2EDuration="37.731792658s" podCreationTimestamp="2026-01-30 08:45:53 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.651157849 +0000 UTC m=+963.623469400" lastFinishedPulling="2026-01-30 08:46:26.08729762 +0000 UTC m=+991.059609171" observedRunningTime="2026-01-30 08:46:30.730525197 +0000 UTC m=+995.702836748" watchObservedRunningTime="2026-01-30 08:46:30.731792658 +0000 UTC m=+995.704104209" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.834411 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" podStartSLOduration=8.208439844 podStartE2EDuration="36.834396843s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.786730971 +0000 UTC m=+963.759042522" lastFinishedPulling="2026-01-30 08:46:27.41268797 +0000 UTC m=+992.384999521" observedRunningTime="2026-01-30 08:46:30.763131589 +0000 UTC m=+995.735443150" watchObservedRunningTime="2026-01-30 08:46:30.834396843 +0000 UTC m=+995.806708384" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.836776 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" podStartSLOduration=8.422750347000001 podStartE2EDuration="36.836763057s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:57.671788793 +0000 UTC m=+962.644100344" lastFinishedPulling="2026-01-30 08:46:26.085801503 +0000 UTC m=+991.058113054" observedRunningTime="2026-01-30 08:46:30.832563896 +0000 UTC m=+995.804875457" watchObservedRunningTime="2026-01-30 08:46:30.836763057 +0000 UTC m=+995.809074608" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.881958 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" podStartSLOduration=7.643156377 podStartE2EDuration="36.881944273s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" 
firstStartedPulling="2026-01-30 08:45:58.787071282 +0000 UTC m=+963.759382833" lastFinishedPulling="2026-01-30 08:46:28.025859178 +0000 UTC m=+992.998170729" observedRunningTime="2026-01-30 08:46:30.880647492 +0000 UTC m=+995.852959033" watchObservedRunningTime="2026-01-30 08:46:30.881944273 +0000 UTC m=+995.854255824" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.948897 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" podStartSLOduration=7.232675782 podStartE2EDuration="37.948876131s" podCreationTimestamp="2026-01-30 08:45:53 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.858147042 +0000 UTC m=+963.830458593" lastFinishedPulling="2026-01-30 08:46:29.574347391 +0000 UTC m=+994.546658942" observedRunningTime="2026-01-30 08:46:30.926412487 +0000 UTC m=+995.898724038" watchObservedRunningTime="2026-01-30 08:46:30.948876131 +0000 UTC m=+995.921187692" Jan 30 08:46:30 crc kubenswrapper[4758]: I0130 08:46:30.956585 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" podStartSLOduration=7.925284555 podStartE2EDuration="36.956542681s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.993866429 +0000 UTC m=+963.966177980" lastFinishedPulling="2026-01-30 08:46:28.025124555 +0000 UTC m=+992.997436106" observedRunningTime="2026-01-30 08:46:30.947932451 +0000 UTC m=+995.920244002" watchObservedRunningTime="2026-01-30 08:46:30.956542681 +0000 UTC m=+995.928854232" Jan 30 08:46:31 crc kubenswrapper[4758]: I0130 08:46:31.100478 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" podStartSLOduration=9.723822475 podStartE2EDuration="37.100455752s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.710471487 +0000 UTC m=+963.682783038" lastFinishedPulling="2026-01-30 08:46:26.087104764 +0000 UTC m=+991.059416315" observedRunningTime="2026-01-30 08:46:30.995067588 +0000 UTC m=+995.967379139" watchObservedRunningTime="2026-01-30 08:46:31.100455752 +0000 UTC m=+996.072767303" Jan 30 08:46:31 crc kubenswrapper[4758]: I0130 08:46:31.201930 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" podStartSLOduration=6.986481922 podStartE2EDuration="37.201909842s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.883070447 +0000 UTC m=+963.855381998" lastFinishedPulling="2026-01-30 08:46:29.098498367 +0000 UTC m=+994.070809918" observedRunningTime="2026-01-30 08:46:31.184541777 +0000 UTC m=+996.156853348" watchObservedRunningTime="2026-01-30 08:46:31.201909842 +0000 UTC m=+996.174221403" Jan 30 08:46:31 crc kubenswrapper[4758]: I0130 08:46:31.206486 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" podStartSLOduration=7.120478975 podStartE2EDuration="37.206460694s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:59.012564009 +0000 UTC m=+963.984875560" lastFinishedPulling="2026-01-30 08:46:29.098545728 +0000 UTC m=+994.070857279" observedRunningTime="2026-01-30 08:46:31.109827595 +0000 UTC m=+996.082139156" 
watchObservedRunningTime="2026-01-30 08:46:31.206460694 +0000 UTC m=+996.178772245" Jan 30 08:46:31 crc kubenswrapper[4758]: I0130 08:46:31.331811 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" podStartSLOduration=36.331794132 podStartE2EDuration="36.331794132s" podCreationTimestamp="2026-01-30 08:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:46:31.330495722 +0000 UTC m=+996.302807273" watchObservedRunningTime="2026-01-30 08:46:31.331794132 +0000 UTC m=+996.304105683" Jan 30 08:46:31 crc kubenswrapper[4758]: I0130 08:46:31.664627 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" event={"ID":"1c4d1258-0416-49d0-a3a5-6ece70dc0c46","Type":"ContainerStarted","Data":"20d409eb98ad29768d87a70ccbfb5df39d35cc82bca621033b8438a2c0078488"} Jan 30 08:46:31 crc kubenswrapper[4758]: I0130 08:46:31.664848 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" Jan 30 08:46:31 crc kubenswrapper[4758]: I0130 08:46:31.699381 4758 generic.go:334] "Generic (PLEG): container finished" podID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerID="8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38" exitCode=0 Jan 30 08:46:31 crc kubenswrapper[4758]: I0130 08:46:31.699467 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzxr" event={"ID":"bcb47b30-3297-48ef-b045-92a3bfa3ade9","Type":"ContainerDied","Data":"8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38"} Jan 30 08:46:31 crc kubenswrapper[4758]: I0130 08:46:31.732464 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" podStartSLOduration=5.302909288 podStartE2EDuration="38.73244604s" podCreationTimestamp="2026-01-30 08:45:53 +0000 UTC" firstStartedPulling="2026-01-30 08:45:56.829837838 +0000 UTC m=+961.802149389" lastFinishedPulling="2026-01-30 08:46:30.25937459 +0000 UTC m=+995.231686141" observedRunningTime="2026-01-30 08:46:31.731407387 +0000 UTC m=+996.703718938" watchObservedRunningTime="2026-01-30 08:46:31.73244604 +0000 UTC m=+996.704757591" Jan 30 08:46:32 crc kubenswrapper[4758]: I0130 08:46:32.708690 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5zts" event={"ID":"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1","Type":"ContainerStarted","Data":"a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638"} Jan 30 08:46:33 crc kubenswrapper[4758]: I0130 08:46:33.716540 4758 generic.go:334] "Generic (PLEG): container finished" podID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerID="a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638" exitCode=0 Jan 30 08:46:33 crc kubenswrapper[4758]: I0130 08:46:33.717408 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5zts" event={"ID":"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1","Type":"ContainerDied","Data":"a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638"} Jan 30 08:46:34 crc kubenswrapper[4758]: I0130 08:46:34.174115 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-6cgsq" Jan 30 08:46:34 crc kubenswrapper[4758]: I0130 08:46:34.260568 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-sw42x" Jan 30 08:46:34 crc kubenswrapper[4758]: I0130 08:46:34.328966 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-54985f5875-jct4c" Jan 30 08:46:34 crc kubenswrapper[4758]: I0130 08:46:34.486275 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-4jmpb" Jan 30 08:46:34 crc kubenswrapper[4758]: I0130 08:46:34.527750 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-cp9km" Jan 30 08:46:34 crc kubenswrapper[4758]: I0130 08:46:34.636018 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-c5d9k" Jan 30 08:46:34 crc kubenswrapper[4758]: I0130 08:46:34.680476 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-kxmjn" Jan 30 08:46:35 crc kubenswrapper[4758]: I0130 08:46:35.209892 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vzcpf" Jan 30 08:46:35 crc kubenswrapper[4758]: I0130 08:46:35.382787 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qlc8l" Jan 30 08:46:35 crc kubenswrapper[4758]: I0130 08:46:35.488282 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-t7dht" Jan 30 08:46:36 crc kubenswrapper[4758]: I0130 08:46:36.094751 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-ngzmc" Jan 30 08:46:38 crc kubenswrapper[4758]: E0130 08:46:38.470472 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" podUID="a104527b-98dc-4120-91b5-6e7e9466b9a3" Jan 30 08:46:42 crc kubenswrapper[4758]: I0130 08:46:42.011549 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-59cb5bfcb7-xtndr" Jan 30 08:46:43 crc kubenswrapper[4758]: I0130 08:46:43.797594 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzxr" event={"ID":"bcb47b30-3297-48ef-b045-92a3bfa3ade9","Type":"ContainerStarted","Data":"ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.196986 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-6nj4p" Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.662661 4758 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vjdn9" Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.805959 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" event={"ID":"b271df00-e9f2-4c58-94e7-22ea4b7d7eaf","Type":"ContainerStarted","Data":"300df300d3d68287507777e7475b9d04ada4f9d37a216a5c794236391b9a198d"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.807339 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" event={"ID":"fe039ec9-aaec-4e17-8eac-c7719245ba4d","Type":"ContainerStarted","Data":"be921b74bca7a395d47acbca6f0d1a333e21674335c6b5556b88a26021f4f862"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.808644 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" event={"ID":"b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7","Type":"ContainerStarted","Data":"0ae710e7e2e9623131973cc1ef57ade1ea4d1e3d16d7d20ca9c2b94318304b7a"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.809799 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" event={"ID":"b6d614d3-1ced-4b27-bd91-8edd410e5fc5","Type":"ContainerStarted","Data":"d6f55fb4a8361cb03deac0797568f331485c09144eeb3009e75206e95cd798d6"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.811007 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" event={"ID":"8cb0c6cc-e254-4dae-b433-397504fba6dc","Type":"ContainerStarted","Data":"8190e99eed5f03b70a61667ef7966e70f00d9f4efd86bd2d8001f6853d718716"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.812929 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" event={"ID":"fbb26261-18aa-4ba0-940e-788200175600","Type":"ContainerStarted","Data":"d6625a3b44a01f6adfe8410a8890e5a95dd31fc3f2ecd051eb91974e89c92f73"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.817435 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5zts" event={"ID":"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1","Type":"ContainerStarted","Data":"b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.818830 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" event={"ID":"0b006f91-5b27-4342-935b-c7a7f174c03b","Type":"ContainerStarted","Data":"2f4909e872d800e52e0b5bb2613b7dd70ad6a3c4fac96649091b99aade2ceefd"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.819046 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.820276 4758 generic.go:334] "Generic (PLEG): container finished" podID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerID="ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb" exitCode=0 Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.820340 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzxr" 
event={"ID":"bcb47b30-3297-48ef-b045-92a3bfa3ade9","Type":"ContainerDied","Data":"ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.821887 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" event={"ID":"ef31968c-db2e-4083-a08f-19a8daf0ac2d","Type":"ContainerStarted","Data":"58625f156738e4961025087556129785eae41d8c2585e63c9c53d762b719e359"} Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.898344 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gt4n2" podStartSLOduration=4.604977741 podStartE2EDuration="48.898325444s" podCreationTimestamp="2026-01-30 08:45:56 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.858457052 +0000 UTC m=+963.830768603" lastFinishedPulling="2026-01-30 08:46:43.151804755 +0000 UTC m=+1008.124116306" observedRunningTime="2026-01-30 08:46:44.848659548 +0000 UTC m=+1009.820971099" watchObservedRunningTime="2026-01-30 08:46:44.898325444 +0000 UTC m=+1009.870636995" Jan 30 08:46:44 crc kubenswrapper[4758]: I0130 08:46:44.964933 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv" podStartSLOduration=6.205774497 podStartE2EDuration="50.96490931s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.795820377 +0000 UTC m=+963.768131938" lastFinishedPulling="2026-01-30 08:46:43.5549552 +0000 UTC m=+1008.527266751" observedRunningTime="2026-01-30 08:46:44.956754865 +0000 UTC m=+1009.929066416" watchObservedRunningTime="2026-01-30 08:46:44.96490931 +0000 UTC m=+1009.937221011" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.828987 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.829030 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.829062 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.848527 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2" podStartSLOduration=7.060254964 podStartE2EDuration="51.848509195s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.769457866 +0000 UTC m=+963.741769417" lastFinishedPulling="2026-01-30 08:46:43.557712097 +0000 UTC m=+1008.530023648" observedRunningTime="2026-01-30 08:46:45.843012143 +0000 UTC m=+1010.815323694" watchObservedRunningTime="2026-01-30 08:46:45.848509195 +0000 UTC m=+1010.820820746" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.859999 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9" podStartSLOduration=7.053920662 podStartE2EDuration="51.859979254s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.749815907 +0000 UTC m=+963.722127458" lastFinishedPulling="2026-01-30 
08:46:43.555874499 +0000 UTC m=+1008.528186050" observedRunningTime="2026-01-30 08:46:45.857419044 +0000 UTC m=+1010.829730595" watchObservedRunningTime="2026-01-30 08:46:45.859979254 +0000 UTC m=+1010.832290805" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.877756 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4" podStartSLOduration=7.254327334 podStartE2EDuration="51.87773875s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.529813194 +0000 UTC m=+963.502124745" lastFinishedPulling="2026-01-30 08:46:43.15322461 +0000 UTC m=+1008.125536161" observedRunningTime="2026-01-30 08:46:45.871075652 +0000 UTC m=+1010.843387233" watchObservedRunningTime="2026-01-30 08:46:45.87773875 +0000 UTC m=+1010.850050301" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.891347 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q5zts" podStartSLOduration=11.278445114 podStartE2EDuration="23.891325897s" podCreationTimestamp="2026-01-30 08:46:22 +0000 UTC" firstStartedPulling="2026-01-30 08:46:30.42782208 +0000 UTC m=+995.400133631" lastFinishedPulling="2026-01-30 08:46:43.040702863 +0000 UTC m=+1008.013014414" observedRunningTime="2026-01-30 08:46:45.88602343 +0000 UTC m=+1010.858334981" watchObservedRunningTime="2026-01-30 08:46:45.891325897 +0000 UTC m=+1010.863637448" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.903827 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj" podStartSLOduration=6.946456128 podStartE2EDuration="51.903803938s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:58.595385501 +0000 UTC m=+963.567697052" lastFinishedPulling="2026-01-30 08:46:43.552733311 +0000 UTC m=+1008.525044862" observedRunningTime="2026-01-30 08:46:45.897749498 +0000 UTC m=+1010.870061049" watchObservedRunningTime="2026-01-30 08:46:45.903803938 +0000 UTC m=+1010.876115499" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.944714 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" podStartSLOduration=38.678650647 podStartE2EDuration="51.94469723s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:46:29.885429401 +0000 UTC m=+994.857740952" lastFinishedPulling="2026-01-30 08:46:43.151475984 +0000 UTC m=+1008.123787535" observedRunningTime="2026-01-30 08:46:45.941135148 +0000 UTC m=+1010.913446709" watchObservedRunningTime="2026-01-30 08:46:45.94469723 +0000 UTC m=+1010.917008771" Jan 30 08:46:45 crc kubenswrapper[4758]: I0130 08:46:45.987474 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" podStartSLOduration=39.509481863 podStartE2EDuration="52.987450679s" podCreationTimestamp="2026-01-30 08:45:53 +0000 UTC" firstStartedPulling="2026-01-30 08:46:29.665311091 +0000 UTC m=+994.637622642" lastFinishedPulling="2026-01-30 08:46:43.143279917 +0000 UTC m=+1008.115591458" observedRunningTime="2026-01-30 08:46:45.974930227 +0000 UTC m=+1010.947241778" watchObservedRunningTime="2026-01-30 08:46:45.987450679 +0000 UTC m=+1010.959762230" Jan 30 08:46:49 crc kubenswrapper[4758]: I0130 08:46:49.851822 
4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzxr" event={"ID":"bcb47b30-3297-48ef-b045-92a3bfa3ade9","Type":"ContainerStarted","Data":"3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f"} Jan 30 08:46:49 crc kubenswrapper[4758]: I0130 08:46:49.880760 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xjzxr" podStartSLOduration=16.298943186 podStartE2EDuration="33.880742283s" podCreationTimestamp="2026-01-30 08:46:16 +0000 UTC" firstStartedPulling="2026-01-30 08:46:31.703187353 +0000 UTC m=+996.675498914" lastFinishedPulling="2026-01-30 08:46:49.28498646 +0000 UTC m=+1014.257298011" observedRunningTime="2026-01-30 08:46:49.873485545 +0000 UTC m=+1014.845797136" watchObservedRunningTime="2026-01-30 08:46:49.880742283 +0000 UTC m=+1014.853053834" Jan 30 08:46:50 crc kubenswrapper[4758]: I0130 08:46:50.357196 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-5k6df" Jan 30 08:46:50 crc kubenswrapper[4758]: I0130 08:46:50.970820 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:46:50 crc kubenswrapper[4758]: I0130 08:46:50.978924 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv" Jan 30 08:46:51 crc kubenswrapper[4758]: I0130 08:46:51.770598 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:46:52 crc kubenswrapper[4758]: I0130 08:46:52.871181 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" event={"ID":"a104527b-98dc-4120-91b5-6e7e9466b9a3","Type":"ContainerStarted","Data":"36854ea4648a5d8015b7277aad0654738565315ee7dd99ca4357cc4ef760750b"} Jan 30 08:46:52 crc kubenswrapper[4758]: I0130 08:46:52.871748 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" Jan 30 08:46:52 crc kubenswrapper[4758]: I0130 08:46:52.892002 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" podStartSLOduration=5.505465937 podStartE2EDuration="58.891982401s" podCreationTimestamp="2026-01-30 08:45:54 +0000 UTC" firstStartedPulling="2026-01-30 08:45:59.068577504 +0000 UTC m=+964.040889055" lastFinishedPulling="2026-01-30 08:46:52.455093968 +0000 UTC m=+1017.427405519" observedRunningTime="2026-01-30 08:46:52.884979761 +0000 UTC m=+1017.857291322" watchObservedRunningTime="2026-01-30 08:46:52.891982401 +0000 UTC m=+1017.864293952" Jan 30 08:46:52 crc kubenswrapper[4758]: I0130 08:46:52.913777 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:52 crc kubenswrapper[4758]: I0130 08:46:52.913827 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:46:53 crc kubenswrapper[4758]: I0130 08:46:53.956183 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-q5zts" podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" 
containerName="registry-server" probeResult="failure" output=<
Jan 30 08:46:53 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s
Jan 30 08:46:53 crc kubenswrapper[4758]: >
Jan 30 08:46:54 crc kubenswrapper[4758]: I0130 08:46:54.611802 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-h89d9"
Jan 30 08:46:54 crc kubenswrapper[4758]: I0130 08:46:54.709435 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-7s6n2"
Jan 30 08:46:54 crc kubenswrapper[4758]: I0130 08:46:54.811259 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-hp2mv"
Jan 30 08:46:55 crc kubenswrapper[4758]: I0130 08:46:55.284714 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj"
Jan 30 08:46:55 crc kubenswrapper[4758]: I0130 08:46:55.287406 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-xhszj"
Jan 30 08:46:55 crc kubenswrapper[4758]: I0130 08:46:55.336104 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4"
Jan 30 08:46:55 crc kubenswrapper[4758]: I0130 08:46:55.339142 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pmvv4"
Jan 30 08:46:57 crc kubenswrapper[4758]: I0130 08:46:57.225081 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xjzxr"
Jan 30 08:46:57 crc kubenswrapper[4758]: I0130 08:46:57.226174 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xjzxr"
Jan 30 08:46:57 crc kubenswrapper[4758]: I0130 08:46:57.267738 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xjzxr"
Jan 30 08:46:57 crc kubenswrapper[4758]: I0130 08:46:57.967781 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xjzxr"
Jan 30 08:46:58 crc kubenswrapper[4758]: I0130 08:46:58.016004 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xjzxr"]
Jan 30 08:46:59 crc kubenswrapper[4758]: I0130 08:46:59.914149 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xjzxr" podUID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerName="registry-server" containerID="cri-o://3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f" gracePeriod=2
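The "Probe failed" entry above shows the redhat-marketplace-q5zts startup probe timing out against the registry-server's gRPC endpoint on :50051 within its 1s budget. For context, here is a minimal sketch of that kind of gRPC health check in Go; it assumes the google.golang.org/grpc module and its grpc_health_v1 service, and is an illustration of the probe semantics rather than the exact probe command the catalog pod defines:

```go
// probe_sketch.go - minimal gRPC health probe with a 1s budget,
// mirroring the timeout seen in the log. Assumes google.golang.org/grpc.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
	defer cancel()

	// Block on dialing so the 1s budget covers connection establishment;
	// the log's failure ("failed to connect service") happens at this stage.
	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		// Loosely mirrors the message kubelet recorded in the probe output.
		fmt.Printf("timeout: failed to connect service %q within 1s\n", ":50051")
		os.Exit(1)
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
		os.Exit(1)
	}
	fmt.Println("status: SERVING")
}
```

A subsequent "startup" probe flips to status="started" once the registry server begins listening, which is exactly the sequence the certified-operators-xjzxr entries above record.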
Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.441995 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-catalog-content\") pod \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") "
Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.442357 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-utilities\") pod \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") "
Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.442486 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk9f4\" (UniqueName: \"kubernetes.io/projected/bcb47b30-3297-48ef-b045-92a3bfa3ade9-kube-api-access-bk9f4\") pod \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\" (UID: \"bcb47b30-3297-48ef-b045-92a3bfa3ade9\") "
Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.443000 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-utilities" (OuterVolumeSpecName: "utilities") pod "bcb47b30-3297-48ef-b045-92a3bfa3ade9" (UID: "bcb47b30-3297-48ef-b045-92a3bfa3ade9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.449493 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb47b30-3297-48ef-b045-92a3bfa3ade9-kube-api-access-bk9f4" (OuterVolumeSpecName: "kube-api-access-bk9f4") pod "bcb47b30-3297-48ef-b045-92a3bfa3ade9" (UID: "bcb47b30-3297-48ef-b045-92a3bfa3ade9"). InnerVolumeSpecName "kube-api-access-bk9f4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.491942 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bcb47b30-3297-48ef-b045-92a3bfa3ade9" (UID: "bcb47b30-3297-48ef-b045-92a3bfa3ade9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.544282 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.544323 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb47b30-3297-48ef-b045-92a3bfa3ade9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.544334 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk9f4\" (UniqueName: \"kubernetes.io/projected/bcb47b30-3297-48ef-b045-92a3bfa3ade9-kube-api-access-bk9f4\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.922340 4758 generic.go:334] "Generic (PLEG): container finished" podID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerID="3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f" exitCode=0 Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.922388 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xjzxr" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.922409 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzxr" event={"ID":"bcb47b30-3297-48ef-b045-92a3bfa3ade9","Type":"ContainerDied","Data":"3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f"} Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.923718 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzxr" event={"ID":"bcb47b30-3297-48ef-b045-92a3bfa3ade9","Type":"ContainerDied","Data":"06b17b09dd9e50fb149bd1b02970b5fb7919445a9f9ff520c7633494ad9262cf"} Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.923744 4758 scope.go:117] "RemoveContainer" containerID="3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.943040 4758 scope.go:117] "RemoveContainer" containerID="ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.957344 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xjzxr"] Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.969277 4758 scope.go:117] "RemoveContainer" containerID="8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.977293 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xjzxr"] Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.988281 4758 scope.go:117] "RemoveContainer" containerID="3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f" Jan 30 08:47:00 crc kubenswrapper[4758]: E0130 08:47:00.988993 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f\": container with ID starting with 3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f not found: ID does not exist" containerID="3cabaf022657ebd2f2e085160bde4007a5b8fc1f0801395c6cd5628030f8ac1f" Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.989069 
Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.989100 4758 scope.go:117] "RemoveContainer" containerID="ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb"
Jan 30 08:47:00 crc kubenswrapper[4758]: E0130 08:47:00.989632 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb\": container with ID starting with ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb not found: ID does not exist" containerID="ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb"
Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.989670 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb"} err="failed to get container status \"ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb\": rpc error: code = NotFound desc = could not find container \"ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb\": container with ID starting with ca7deed823faa4712b01753debae5bcf9357a4b3135557d205dcb422b4eba5cb not found: ID does not exist"
Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.989698 4758 scope.go:117] "RemoveContainer" containerID="8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38"
Jan 30 08:47:00 crc kubenswrapper[4758]: E0130 08:47:00.989979 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38\": container with ID starting with 8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38 not found: ID does not exist" containerID="8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38"
Jan 30 08:47:00 crc kubenswrapper[4758]: I0130 08:47:00.990008 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38"} err="failed to get container status \"8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38\": rpc error: code = NotFound desc = could not find container \"8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38\": container with ID starting with 8cffdf97d7033eafa316cb7524921a7fea4d1d9a78c9ffcdda69641e4dfd8f38 not found: ID does not exist"
Jan 30 08:47:01 crc kubenswrapper[4758]: I0130 08:47:01.778259 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" path="/var/lib/kubelet/pods/bcb47b30-3297-48ef-b045-92a3bfa3ade9/volumes"
Jan 30 08:47:02 crc kubenswrapper[4758]: I0130 08:47:02.962123 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q5zts"
Jan 30 08:47:03 crc kubenswrapper[4758]: I0130 08:47:02.999950 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q5zts"
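The startup/readiness flips above are the catalog pods' registry-server gRPC probe on :50051; the earlier failure output, timeout: failed to connect service ":50051" within 1s, is the same check exceeding its 1-second budget while the catalog index loads. A sketch of an equivalent client-side check with grpc-go's health API, assuming the server implements grpc.health.v1.Health:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// The probe's 1s budget, matching the timeout in the failure output.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
	if err != nil {
		log.Fatalf("timeout: failed to connect service %q within 1s", ":50051")
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Status) // SERVING once the catalog has loaded
}
```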
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:47:03 crc kubenswrapper[4758]: I0130 08:47:03.904579 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5zts"] Jan 30 08:47:04 crc kubenswrapper[4758]: I0130 08:47:04.948777 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q5zts" podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerName="registry-server" containerID="cri-o://b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c" gracePeriod=2 Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.386153 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5zts" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.510485 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-utilities\") pod \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.510534 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-catalog-content\") pod \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.510662 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggxq2\" (UniqueName: \"kubernetes.io/projected/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-kube-api-access-ggxq2\") pod \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\" (UID: \"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1\") " Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.511519 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-utilities" (OuterVolumeSpecName: "utilities") pod "3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" (UID: "3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.516454 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-kube-api-access-ggxq2" (OuterVolumeSpecName: "kube-api-access-ggxq2") pod "3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" (UID: "3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1"). InnerVolumeSpecName "kube-api-access-ggxq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.539661 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" (UID: "3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.612476 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.612542 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.612553 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggxq2\" (UniqueName: \"kubernetes.io/projected/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1-kube-api-access-ggxq2\") on node \"crc\" DevicePath \"\"" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.697156 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-kdnzv" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.959907 4758 generic.go:334] "Generic (PLEG): container finished" podID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerID="b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c" exitCode=0 Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.960126 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5zts" event={"ID":"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1","Type":"ContainerDied","Data":"b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c"} Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.960546 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5zts" event={"ID":"3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1","Type":"ContainerDied","Data":"51348cfa0c0e6ca90b1d98c0ff7786082c45a4de6c44a4f8954078597967a5f3"} Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.960638 4758 scope.go:117] "RemoveContainer" containerID="b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c" Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.960182 4758 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.976980 4758 scope.go:117] "RemoveContainer" containerID="a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638"
Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.984637 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5zts"]
Jan 30 08:47:05 crc kubenswrapper[4758]: I0130 08:47:05.991665 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5zts"]
Jan 30 08:47:06 crc kubenswrapper[4758]: I0130 08:47:06.010282 4758 scope.go:117] "RemoveContainer" containerID="685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1"
Jan 30 08:47:06 crc kubenswrapper[4758]: I0130 08:47:06.027305 4758 scope.go:117] "RemoveContainer" containerID="b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c"
Jan 30 08:47:06 crc kubenswrapper[4758]: E0130 08:47:06.027735 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c\": container with ID starting with b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c not found: ID does not exist" containerID="b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c"
Jan 30 08:47:06 crc kubenswrapper[4758]: I0130 08:47:06.027831 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c"} err="failed to get container status \"b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c\": rpc error: code = NotFound desc = could not find container \"b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c\": container with ID starting with b37533786677045c207ddb9b35745a143bee2e1a848deb98e3fe49574f98d81c not found: ID does not exist"
Jan 30 08:47:06 crc kubenswrapper[4758]: I0130 08:47:06.027942 4758 scope.go:117] "RemoveContainer" containerID="a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638"
Jan 30 08:47:06 crc kubenswrapper[4758]: E0130 08:47:06.028227 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638\": container with ID starting with a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638 not found: ID does not exist" containerID="a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638"
Jan 30 08:47:06 crc kubenswrapper[4758]: I0130 08:47:06.028320 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638"} err="failed to get container status \"a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638\": rpc error: code = NotFound desc = could not find container \"a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638\": container with ID starting with a98c1617a48d5a33d9ef2a9926b07737705de337531c1b8c5684e16410912638 not found: ID does not exist"
Jan 30 08:47:06 crc kubenswrapper[4758]: I0130 08:47:06.028402 4758 scope.go:117] "RemoveContainer" containerID="685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1"
Jan 30 08:47:06 crc kubenswrapper[4758]: E0130 08:47:06.028661 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1\": container with ID starting with 685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1 not found: ID does not exist" containerID="685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1"
failed" err="rpc error: code = NotFound desc = could not find container \"685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1\": container with ID starting with 685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1 not found: ID does not exist" containerID="685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1" Jan 30 08:47:06 crc kubenswrapper[4758]: I0130 08:47:06.028754 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1"} err="failed to get container status \"685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1\": rpc error: code = NotFound desc = could not find container \"685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1\": container with ID starting with 685e8f7eb9235f893d9ed6332df24af25d607f11bed36e9a52202c2b1af781e1 not found: ID does not exist" Jan 30 08:47:07 crc kubenswrapper[4758]: I0130 08:47:07.776321 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" path="/var/lib/kubelet/pods/3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1/volumes" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.250641 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8vbmk"] Jan 30 08:47:23 crc kubenswrapper[4758]: E0130 08:47:23.257175 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerName="extract-content" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.257216 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerName="extract-content" Jan 30 08:47:23 crc kubenswrapper[4758]: E0130 08:47:23.257232 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerName="extract-utilities" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.257240 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerName="extract-utilities" Jan 30 08:47:23 crc kubenswrapper[4758]: E0130 08:47:23.257250 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerName="registry-server" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.257257 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerName="registry-server" Jan 30 08:47:23 crc kubenswrapper[4758]: E0130 08:47:23.257279 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerName="registry-server" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.257287 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerName="registry-server" Jan 30 08:47:23 crc kubenswrapper[4758]: E0130 08:47:23.257299 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerName="extract-content" Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.257306 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerName="extract-content" Jan 30 08:47:23 crc kubenswrapper[4758]: E0130 08:47:23.257322 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerName="extract-utilities" Jan 30 
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.257330 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerName="extract-utilities"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.257572 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb47b30-3297-48ef-b045-92a3bfa3ade9" containerName="registry-server"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.257589 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8851e8-8f4f-4f19-9aa6-e4a05f90d2b1" containerName="registry-server"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.258537 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.262649 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.262685 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.262696 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.262878 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-6tkgt"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.278524 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8vbmk"]
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.334175 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xxlqj"]
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.335246 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj"
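The "Caches populated" lines show kubelet starting one reflector per referenced object: rather than watching all ConfigMaps and Secrets, it watches exactly the ones the new pod mounts (dns, kube-root-ca.crt, openshift-service-ca.crt, the pull secret). A generic client-go sketch of the same single-object trick for one named ConfigMap, assuming in-cluster credentials:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch exactly one object, like the per-pod reflector for
	// object-"openstack"/"dns" above: a field selector on metadata.name.
	lw := cache.NewListWatchFromClient(client.CoreV1().RESTClient(),
		"configmaps", "openstack", fields.OneTermEqualSelector("metadata.name", "dns"))

	_, controller := cache.NewInformer(lw, &corev1.ConfigMap{}, 30*time.Second,
		cache.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				fmt.Println("cache populated for ConfigMap openstack/dns")
			},
		})
	controller.Run(make(chan struct{})) // blocks; close the channel to stop
}
```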
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.339708 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.348515 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b978eb-b181-4644-890e-e990cd2ba269-config\") pod \"dnsmasq-dns-675f4bcbfc-8vbmk\" (UID: \"76b978eb-b181-4644-890e-e990cd2ba269\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.348559 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g8xh\" (UniqueName: \"kubernetes.io/projected/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-kube-api-access-8g8xh\") pod \"dnsmasq-dns-78dd6ddcc-xxlqj\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.348969 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-config\") pod \"dnsmasq-dns-78dd6ddcc-xxlqj\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.349030 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtj9x\" (UniqueName: \"kubernetes.io/projected/76b978eb-b181-4644-890e-e990cd2ba269-kube-api-access-mtj9x\") pod \"dnsmasq-dns-675f4bcbfc-8vbmk\" (UID: \"76b978eb-b181-4644-890e-e990cd2ba269\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.349120 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xxlqj\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.356412 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xxlqj"]
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.450251 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xxlqj\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.450321 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b978eb-b181-4644-890e-e990cd2ba269-config\") pod \"dnsmasq-dns-675f4bcbfc-8vbmk\" (UID: \"76b978eb-b181-4644-890e-e990cd2ba269\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.450352 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g8xh\" (UniqueName: \"kubernetes.io/projected/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-kube-api-access-8g8xh\") pod \"dnsmasq-dns-78dd6ddcc-xxlqj\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.450428 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-config\") pod \"dnsmasq-dns-78dd6ddcc-xxlqj\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.450463 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtj9x\" (UniqueName: \"kubernetes.io/projected/76b978eb-b181-4644-890e-e990cd2ba269-kube-api-access-mtj9x\") pod \"dnsmasq-dns-675f4bcbfc-8vbmk\" (UID: \"76b978eb-b181-4644-890e-e990cd2ba269\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.451418 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-xxlqj\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.451432 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-config\") pod \"dnsmasq-dns-78dd6ddcc-xxlqj\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.451491 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b978eb-b181-4644-890e-e990cd2ba269-config\") pod \"dnsmasq-dns-675f4bcbfc-8vbmk\" (UID: \"76b978eb-b181-4644-890e-e990cd2ba269\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.469917 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g8xh\" (UniqueName: \"kubernetes.io/projected/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-kube-api-access-8g8xh\") pod \"dnsmasq-dns-78dd6ddcc-xxlqj\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") " pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.483898 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtj9x\" (UniqueName: \"kubernetes.io/projected/76b978eb-b181-4644-890e-e990cd2ba269-kube-api-access-mtj9x\") pod \"dnsmasq-dns-675f4bcbfc-8vbmk\" (UID: \"76b978eb-b181-4644-890e-e990cd2ba269\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.574465 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk"
Jan 30 08:47:23 crc kubenswrapper[4758]: I0130 08:47:23.649769 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj"
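Each dnsmasq pod above mounts two ConfigMap volumes (config and dns-svc) plus an auto-injected kube-api-access-* token volume. A sketch of how the ConfigMap side of such a pod spec looks in Go; the image and mount path are hypothetical, and the pairing of the config volume with the dns ConfigMap is inferred from the reflector lines above:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	pod := corev1.Pod{
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{Name: "config", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						// Inferred pairing; the reflector above watches "dns".
						LocalObjectReference: corev1.LocalObjectReference{Name: "dns"},
					}}},
				{Name: "dns-svc", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "dns-svc"},
					}}},
			},
			Containers: []corev1.Container{{
				Name:  "dnsmasq-dns",
				Image: "example.invalid/dnsmasq:latest", // hypothetical image
				VolumeMounts: []corev1.VolumeMount{
					// Hypothetical mount path for illustration.
					{Name: "config", MountPath: "/etc/dnsmasq.d", ReadOnly: true},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].Name, pod.Spec.Volumes[1].Name)
}
```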
Jan 30 08:47:24 crc kubenswrapper[4758]: I0130 08:47:24.032326 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8vbmk"]
Jan 30 08:47:24 crc kubenswrapper[4758]: I0130 08:47:24.075151 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk" event={"ID":"76b978eb-b181-4644-890e-e990cd2ba269","Type":"ContainerStarted","Data":"665d5ba8bc1f090d8cef06dca81382042d1531fae5dcc1915e1bee30b5b90286"}
Jan 30 08:47:24 crc kubenswrapper[4758]: I0130 08:47:24.119797 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xxlqj"]
Jan 30 08:47:24 crc kubenswrapper[4758]: W0130 08:47:24.125729 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod282c7a61_ccff_46d9_bd1b_e3d40f7e9992.slice/crio-bac8612bddb729c12a6a05a6538fbeafbeb08dc4fa53c2af3518421001d9d89e WatchSource:0}: Error finding container bac8612bddb729c12a6a05a6538fbeafbeb08dc4fa53c2af3518421001d9d89e: Status 404 returned error can't find the container with id bac8612bddb729c12a6a05a6538fbeafbeb08dc4fa53c2af3518421001d9d89e
Jan 30 08:47:25 crc kubenswrapper[4758]: I0130 08:47:25.085844 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" event={"ID":"282c7a61-ccff-46d9-bd1b-e3d40f7e9992","Type":"ContainerStarted","Data":"bac8612bddb729c12a6a05a6538fbeafbeb08dc4fa53c2af3518421001d9d89e"}
Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.179219 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8vbmk"]
Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.230158 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lfzl4"]
Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.231291 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lfzl4"
Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.244882 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lfzl4"]
Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.303937 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-config\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " pod="openstack/dnsmasq-dns-666b6646f7-lfzl4"
Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.304137 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-dns-svc\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " pod="openstack/dnsmasq-dns-666b6646f7-lfzl4"
Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.304271 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsz2x\" (UniqueName: \"kubernetes.io/projected/e2df4666-d5a3-4445-81a2-6cf85fd83b33-kube-api-access-gsz2x\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " pod="openstack/dnsmasq-dns-666b6646f7-lfzl4"
Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.406126 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-config\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " pod="openstack/dnsmasq-dns-666b6646f7-lfzl4"
Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.406191 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-dns-svc\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " pod="openstack/dnsmasq-dns-666b6646f7-lfzl4"
Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.406326 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsz2x\" (UniqueName: \"kubernetes.io/projected/e2df4666-d5a3-4445-81a2-6cf85fd83b33-kube-api-access-gsz2x\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " pod="openstack/dnsmasq-dns-666b6646f7-lfzl4"
Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.407095 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-dns-svc\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " pod="openstack/dnsmasq-dns-666b6646f7-lfzl4"
Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.407669 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-config\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " pod="openstack/dnsmasq-dns-666b6646f7-lfzl4"
Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.432493 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsz2x\" (UniqueName: \"kubernetes.io/projected/e2df4666-d5a3-4445-81a2-6cf85fd83b33-kube-api-access-gsz2x\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " pod="openstack/dnsmasq-dns-666b6646f7-lfzl4"
\"kubernetes.io/projected/e2df4666-d5a3-4445-81a2-6cf85fd83b33-kube-api-access-gsz2x\") pod \"dnsmasq-dns-666b6646f7-lfzl4\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.567362 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.602875 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xxlqj"] Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.652226 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8zllc"] Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.653841 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.662632 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8zllc"] Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.811626 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfwpj\" (UniqueName: \"kubernetes.io/projected/06ace92c-6051-496d-add5-845fcd6b184f-kube-api-access-jfwpj\") pod \"dnsmasq-dns-57d769cc4f-8zllc\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.811692 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8zllc\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.811774 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-config\") pod \"dnsmasq-dns-57d769cc4f-8zllc\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.916019 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-config\") pod \"dnsmasq-dns-57d769cc4f-8zllc\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.916592 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfwpj\" (UniqueName: \"kubernetes.io/projected/06ace92c-6051-496d-add5-845fcd6b184f-kube-api-access-jfwpj\") pod \"dnsmasq-dns-57d769cc4f-8zllc\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.916628 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8zllc\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.917613 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8zllc\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.922334 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-config\") pod \"dnsmasq-dns-57d769cc4f-8zllc\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:26 crc kubenswrapper[4758]: I0130 08:47:26.958704 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfwpj\" (UniqueName: \"kubernetes.io/projected/06ace92c-6051-496d-add5-845fcd6b184f-kube-api-access-jfwpj\") pod \"dnsmasq-dns-57d769cc4f-8zllc\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.070981 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.086336 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lfzl4"] Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.439279 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.440706 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.447142 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.447358 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.447638 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.447894 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.448163 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-8zbw4" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.448367 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.448529 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.455579 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631554 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631621 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631641 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631661 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631692 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631711 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-config-data\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631730 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631748 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631773 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-pod-info\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631792 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-server-conf\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.631813 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p297\" (UniqueName: 
\"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-kube-api-access-7p297\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.656136 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8zllc"] Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.732930 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.732989 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-config-data\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.733008 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.733064 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.733118 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-pod-info\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.733144 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-server-conf\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.733175 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p297\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-kube-api-access-7p297\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.733240 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.733310 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.733363 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.733387 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.734699 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.736323 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.736504 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.736591 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.737206 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-config-data\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.738557 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-server-conf\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.765221 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.766284 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.767249 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-pod-info\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.791064 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.877567 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p297\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-kube-api-access-7p297\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.883743 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " pod="openstack/rabbitmq-server-0" Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.900154 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.901442 4758 util.go:30] "No sandbox for pod can be found. 
Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.911471 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.911740 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.921417 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.921748 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-nwnvg"
Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.921864 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.922699 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.929529 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 30 08:47:27 crc kubenswrapper[4758]: I0130 08:47:27.946265 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.093760 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.093825 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.093865 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.093905 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.093923 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.093961 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.094009 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.094031 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.094063 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xslvz\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-kube-api-access-xslvz\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.094081 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.094104 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.101347 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.149419 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" event={"ID":"e2df4666-d5a3-4445-81a2-6cf85fd83b33","Type":"ContainerStarted","Data":"b96b4fc83550ef190db47ae38eb6bf89d769a168b8ca5a2463f2d0e77a15c934"} Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.154942 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" event={"ID":"06ace92c-6051-496d-add5-845fcd6b184f","Type":"ContainerStarted","Data":"c4dbfaf7b50d7784d2c38e9516b227a07cd7a4d0e1609065c924221cceeafd2c"} Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.195788 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.195844 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.195896 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.195922 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.195979 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.196000 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.196202 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.196243 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.196589 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.196593 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.197012 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.197093 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.197124 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xslvz\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-kube-api-access-xslvz\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.197151 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.204502 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.206437 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.206576 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.207838 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.208590 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.209412 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.217225 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.223470 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.226820 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xslvz\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-kube-api-access-xslvz\") pod \"rabbitmq-cell1-server-0\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.254556 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:47:28 crc kubenswrapper[4758]: I0130 08:47:28.823091 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.077891 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.146813 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.147881 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.156960 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-6rqbl" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.157084 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.157189 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.161138 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.167747 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.182144 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.252767 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89ff2fc5-609f-4ca7-b997-9f8adfa5a221","Type":"ContainerStarted","Data":"8f4809dc1b13b9e08c7692a160b88862bacf3c8f0bf775e5a671d7ecfeb7b0f7"} Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.257767 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5","Type":"ContainerStarted","Data":"2885eef36f4c15f0a02d92953e9ac8c27214898712fb29c24b72fc2ee76d019d"} Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.333493 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-config-data-default\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.333548 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.333580 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-kolla-config\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.333610 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.333639 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-operator-scripts\") pod \"openstack-galera-0\" (UID: 
\"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.333657 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.333689 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.333736 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86ll5\" (UniqueName: \"kubernetes.io/projected/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-kube-api-access-86ll5\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.434524 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86ll5\" (UniqueName: \"kubernetes.io/projected/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-kube-api-access-86ll5\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.434583 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-config-data-default\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.434611 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.434636 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-kolla-config\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.434658 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.434682 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.434699 
4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.434726 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.434931 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.435107 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.435690 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-kolla-config\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.436494 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-config-data-default\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.440179 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.465790 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.469263 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.473616 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86ll5\" (UniqueName: \"kubernetes.io/projected/1787d8b1-5b19-41e5-a66d-8375f9d5bb3f-kube-api-access-86ll5\") pod 
\"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.493233 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f\") " pod="openstack/openstack-galera-0" Jan 30 08:47:29 crc kubenswrapper[4758]: I0130 08:47:29.783106 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.286449 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.287805 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.291370 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-p4fh8" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.291576 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.291878 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.291995 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.300503 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.394940 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a15517a-ff48-40d1-91b4-442bfef91fc1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.397178 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwnhs\" (UniqueName: \"kubernetes.io/projected/0a15517a-ff48-40d1-91b4-442bfef91fc1-kube-api-access-bwnhs\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.397239 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.397279 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0a15517a-ff48-40d1-91b4-442bfef91fc1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.397326 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a15517a-ff48-40d1-91b4-442bfef91fc1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.397365 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0a15517a-ff48-40d1-91b4-442bfef91fc1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.397393 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0a15517a-ff48-40d1-91b4-442bfef91fc1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.397417 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a15517a-ff48-40d1-91b4-442bfef91fc1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.499263 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.500272 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.506150 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwnhs\" (UniqueName: \"kubernetes.io/projected/0a15517a-ff48-40d1-91b4-442bfef91fc1-kube-api-access-bwnhs\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.506208 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.506234 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0a15517a-ff48-40d1-91b4-442bfef91fc1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.506261 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a15517a-ff48-40d1-91b4-442bfef91fc1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.506288 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/0a15517a-ff48-40d1-91b4-442bfef91fc1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.506310 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0a15517a-ff48-40d1-91b4-442bfef91fc1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.506334 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a15517a-ff48-40d1-91b4-442bfef91fc1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.506368 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a15517a-ff48-40d1-91b4-442bfef91fc1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.506853 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.507871 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a15517a-ff48-40d1-91b4-442bfef91fc1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.508437 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0a15517a-ff48-40d1-91b4-442bfef91fc1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.509140 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0a15517a-ff48-40d1-91b4-442bfef91fc1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.512764 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.513227 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.513342 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0a15517a-ff48-40d1-91b4-442bfef91fc1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.515018 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.530563 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kdb57" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.535129 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a15517a-ff48-40d1-91b4-442bfef91fc1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.579551 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a15517a-ff48-40d1-91b4-442bfef91fc1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.580340 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwnhs\" (UniqueName: \"kubernetes.io/projected/0a15517a-ff48-40d1-91b4-442bfef91fc1-kube-api-access-bwnhs\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.598900 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0a15517a-ff48-40d1-91b4-442bfef91fc1\") " pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.617853 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.617923 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.617992 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vh24\" (UniqueName: \"kubernetes.io/projected/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-kube-api-access-8vh24\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.618012 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-kolla-config\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.618063 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-config-data\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.638674 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.718956 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.719003 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.719055 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vh24\" (UniqueName: \"kubernetes.io/projected/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-kube-api-access-8vh24\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.719079 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-kolla-config\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.719103 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-config-data\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.720016 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-config-data\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.721270 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-kolla-config\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.747098 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.748073 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-memcached-tls-certs\") pod 
\"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.790708 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vh24\" (UniqueName: \"kubernetes.io/projected/9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d-kube-api-access-8vh24\") pod \"memcached-0\" (UID: \"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d\") " pod="openstack/memcached-0" Jan 30 08:47:30 crc kubenswrapper[4758]: I0130 08:47:30.836393 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 08:47:31 crc kubenswrapper[4758]: I0130 08:47:31.517740 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 08:47:31 crc kubenswrapper[4758]: I0130 08:47:31.723666 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 08:47:31 crc kubenswrapper[4758]: I0130 08:47:31.896577 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.405369 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.415023 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.418963 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.451125 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-4pd9v" Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.478207 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c952\" (UniqueName: \"kubernetes.io/projected/cbb296cf-3469-43c3-9ebe-8fd1d31c00a6-kube-api-access-5c952\") pod \"kube-state-metrics-0\" (UID: \"cbb296cf-3469-43c3-9ebe-8fd1d31c00a6\") " pod="openstack/kube-state-metrics-0" Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.577857 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0a15517a-ff48-40d1-91b4-442bfef91fc1","Type":"ContainerStarted","Data":"cf475cc5d360c9c29e47bdbfb01dfa1f891f176fcaa0100edc0b6c1bb6283d95"} Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.579528 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d","Type":"ContainerStarted","Data":"08841b6a57727c7be4ea9e79802b75e9976c6628cfcd6f40426cb3244f93caf9"} Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.581278 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f","Type":"ContainerStarted","Data":"ef4d61440694ddae18cf4ca34416964685192c06cbabddeff999f222b3a367fd"} Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.585149 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c952\" (UniqueName: \"kubernetes.io/projected/cbb296cf-3469-43c3-9ebe-8fd1d31c00a6-kube-api-access-5c952\") pod \"kube-state-metrics-0\" (UID: \"cbb296cf-3469-43c3-9ebe-8fd1d31c00a6\") " pod="openstack/kube-state-metrics-0" Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.613196 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5c952\" (UniqueName: \"kubernetes.io/projected/cbb296cf-3469-43c3-9ebe-8fd1d31c00a6-kube-api-access-5c952\") pod \"kube-state-metrics-0\" (UID: \"cbb296cf-3469-43c3-9ebe-8fd1d31c00a6\") " pod="openstack/kube-state-metrics-0" Jan 30 08:47:32 crc kubenswrapper[4758]: I0130 08:47:32.789655 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 08:47:33 crc kubenswrapper[4758]: I0130 08:47:33.746499 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:47:33 crc kubenswrapper[4758]: W0130 08:47:33.758367 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbb296cf_3469_43c3_9ebe_8fd1d31c00a6.slice/crio-d0e1a0f857fe6a3aca4ee7d7173a8d5441df3a299b48556a5a5526d0a92355f7 WatchSource:0}: Error finding container d0e1a0f857fe6a3aca4ee7d7173a8d5441df3a299b48556a5a5526d0a92355f7: Status 404 returned error can't find the container with id d0e1a0f857fe6a3aca4ee7d7173a8d5441df3a299b48556a5a5526d0a92355f7 Jan 30 08:47:34 crc kubenswrapper[4758]: I0130 08:47:34.621030 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cbb296cf-3469-43c3-9ebe-8fd1d31c00a6","Type":"ContainerStarted","Data":"d0e1a0f857fe6a3aca4ee7d7173a8d5441df3a299b48556a5a5526d0a92355f7"} Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.102458 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bg2b8"] Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.103893 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.116134 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.116173 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.116466 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pcbvq" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.133337 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bg2b8"] Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.191199 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78294966-2fbd-4ed5-8d2a-2096ac07dac1-scripts\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.191262 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78294966-2fbd-4ed5-8d2a-2096ac07dac1-combined-ca-bundle\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.191298 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd2p4\" (UniqueName: \"kubernetes.io/projected/78294966-2fbd-4ed5-8d2a-2096ac07dac1-kube-api-access-kd2p4\") pod 
\"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.191325 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/78294966-2fbd-4ed5-8d2a-2096ac07dac1-var-run\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.191346 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/78294966-2fbd-4ed5-8d2a-2096ac07dac1-var-run-ovn\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.191379 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/78294966-2fbd-4ed5-8d2a-2096ac07dac1-ovn-controller-tls-certs\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.191420 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/78294966-2fbd-4ed5-8d2a-2096ac07dac1-var-log-ovn\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.212106 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-9jzfn"] Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.217148 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.237004 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9jzfn"] Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294266 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-var-log\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294317 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzkm7\" (UniqueName: \"kubernetes.io/projected/86f64629-c944-4783-8012-7cea45690009-kube-api-access-wzkm7\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294356 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78294966-2fbd-4ed5-8d2a-2096ac07dac1-scripts\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294384 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-etc-ovs\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294403 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78294966-2fbd-4ed5-8d2a-2096ac07dac1-combined-ca-bundle\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294431 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd2p4\" (UniqueName: \"kubernetes.io/projected/78294966-2fbd-4ed5-8d2a-2096ac07dac1-kube-api-access-kd2p4\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294449 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/78294966-2fbd-4ed5-8d2a-2096ac07dac1-var-run\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294464 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/78294966-2fbd-4ed5-8d2a-2096ac07dac1-var-run-ovn\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294487 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/78294966-2fbd-4ed5-8d2a-2096ac07dac1-ovn-controller-tls-certs\") pod 
\"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294555 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86f64629-c944-4783-8012-7cea45690009-scripts\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294575 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/78294966-2fbd-4ed5-8d2a-2096ac07dac1-var-log-ovn\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294605 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-var-lib\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.294631 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-var-run\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.297253 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78294966-2fbd-4ed5-8d2a-2096ac07dac1-scripts\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.300238 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/78294966-2fbd-4ed5-8d2a-2096ac07dac1-var-run\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.300478 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/78294966-2fbd-4ed5-8d2a-2096ac07dac1-var-log-ovn\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.300781 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/78294966-2fbd-4ed5-8d2a-2096ac07dac1-var-run-ovn\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.317451 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/78294966-2fbd-4ed5-8d2a-2096ac07dac1-ovn-controller-tls-certs\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.319953 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-kd2p4\" (UniqueName: \"kubernetes.io/projected/78294966-2fbd-4ed5-8d2a-2096ac07dac1-kube-api-access-kd2p4\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.322604 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78294966-2fbd-4ed5-8d2a-2096ac07dac1-combined-ca-bundle\") pod \"ovn-controller-bg2b8\" (UID: \"78294966-2fbd-4ed5-8d2a-2096ac07dac1\") " pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.396000 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86f64629-c944-4783-8012-7cea45690009-scripts\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.396121 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-var-lib\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.396160 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-var-run\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.396202 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-var-log\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.396250 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzkm7\" (UniqueName: \"kubernetes.io/projected/86f64629-c944-4783-8012-7cea45690009-kube-api-access-wzkm7\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.396285 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-etc-ovs\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.396681 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-etc-ovs\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.397730 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-var-lib\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " 
pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.397816 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-var-log\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.398618 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/86f64629-c944-4783-8012-7cea45690009-var-run\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.398983 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86f64629-c944-4783-8012-7cea45690009-scripts\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.419099 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzkm7\" (UniqueName: \"kubernetes.io/projected/86f64629-c944-4783-8012-7cea45690009-kube-api-access-wzkm7\") pod \"ovn-controller-ovs-9jzfn\" (UID: \"86f64629-c944-4783-8012-7cea45690009\") " pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.446832 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bg2b8" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.539609 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.925426 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.926838 4758 util.go:30] "No sandbox for pod can be found. 
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.928861 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.929498 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.930361 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.932424 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.937192 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-788gr"
Jan 30 08:47:36 crc kubenswrapper[4758]: I0130 08:47:36.942199 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.020125 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7ba2509-bffc-4639-9b6f-188e2a194b7a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.020197 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7ba2509-bffc-4639-9b6f-188e2a194b7a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.020245 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.020290 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ba2509-bffc-4639-9b6f-188e2a194b7a-config\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.020328 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a7ba2509-bffc-4639-9b6f-188e2a194b7a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.020347 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ba2509-bffc-4639-9b6f-188e2a194b7a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.020413 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7ba2509-bffc-4639-9b6f-188e2a194b7a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.020450 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2hpk\" (UniqueName: \"kubernetes.io/projected/a7ba2509-bffc-4639-9b6f-188e2a194b7a-kube-api-access-h2hpk\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.121683 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7ba2509-bffc-4639-9b6f-188e2a194b7a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.121741 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7ba2509-bffc-4639-9b6f-188e2a194b7a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.121772 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.121800 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ba2509-bffc-4639-9b6f-188e2a194b7a-config\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.121822 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a7ba2509-bffc-4639-9b6f-188e2a194b7a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.124119 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ba2509-bffc-4639-9b6f-188e2a194b7a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.124408 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7ba2509-bffc-4639-9b6f-188e2a194b7a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.124550 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7ba2509-bffc-4639-9b6f-188e2a194b7a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.124765 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.125375 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7ba2509-bffc-4639-9b6f-188e2a194b7a-config\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.125642 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a7ba2509-bffc-4639-9b6f-188e2a194b7a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.126659 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2hpk\" (UniqueName: \"kubernetes.io/projected/a7ba2509-bffc-4639-9b6f-188e2a194b7a-kube-api-access-h2hpk\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.135227 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7ba2509-bffc-4639-9b6f-188e2a194b7a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.135907 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7ba2509-bffc-4639-9b6f-188e2a194b7a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.143324 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7ba2509-bffc-4639-9b6f-188e2a194b7a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.165762 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2hpk\" (UniqueName: \"kubernetes.io/projected/a7ba2509-bffc-4639-9b6f-188e2a194b7a-kube-api-access-h2hpk\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.206861 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a7ba2509-bffc-4639-9b6f-188e2a194b7a\") " pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:37 crc kubenswrapper[4758]: I0130 08:47:37.267707 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.080022 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.085926 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.089109 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.089906 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-575xs"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.089990 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.093720 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.096722 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.205856 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.205900 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5cns\" (UniqueName: \"kubernetes.io/projected/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-kube-api-access-c5cns\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.205955 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.205997 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.206076 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.206127 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.206145 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.206187 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-config\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.307583 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.307631 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5cns\" (UniqueName: \"kubernetes.io/projected/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-kube-api-access-c5cns\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.307679 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.307717 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.307746 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.307792 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.308761 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.308826 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-config\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.308983 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.309137 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.312693 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.314287 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-config\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.315931 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.316855 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.317702 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.323916 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5cns\" (UniqueName: \"kubernetes.io/projected/0817ec2c-1c6d-4c1b-a019-3f2579ade18a-kube-api-access-c5cns\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.333098 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0817ec2c-1c6d-4c1b-a019-3f2579ade18a\") " pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:40 crc kubenswrapper[4758]: I0130 08:47:40.423367 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 30 08:47:48 crc kubenswrapper[4758]: E0130 08:47:48.969628 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified"
Jan 30 08:47:48 crc kubenswrapper[4758]: E0130 08:47:48.970359 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n677hddh589h9dh5ddh5d6h56ch65chf5h5cdh666h54ch79h66fhb4h68h677h677h5fh647h5h76h67ch56chch554hcbh64bh64fh597h685h69q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8vh24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 08:47:48 crc kubenswrapper[4758]: E0130 08:47:48.971537 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d"
Jan 30 08:47:49 crc kubenswrapper[4758]: E0130 08:47:49.780699 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d"
Jan 30 08:47:52 crc kubenswrapper[4758]: I0130 08:47:52.390476 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:47:52 crc kubenswrapper[4758]: I0130 08:47:52.390773 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:47:53 crc kubenswrapper[4758]: E0130 08:47:53.866402 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Jan 30 08:47:53 crc kubenswrapper[4758]: E0130 08:47:53.866858 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7p297,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(89ff2fc5-609f-4ca7-b997-9f8adfa5a221): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 08:47:53 crc kubenswrapper[4758]: E0130 08:47:53.868909 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221"
Jan 30 08:47:53 crc kubenswrapper[4758]: E0130 08:47:53.873778 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Jan 30 08:47:53 crc kubenswrapper[4758]: E0130 08:47:53.874604 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xslvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 08:47:53 crc kubenswrapper[4758]: E0130 08:47:53.875751 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"
Jan 30 08:47:54 crc kubenswrapper[4758]: E0130 08:47:54.819373 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221"
Jan 30 08:47:54 crc kubenswrapper[4758]: E0130 08:47:54.819493 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"
Jan 30 08:47:55 crc kubenswrapper[4758]: E0130 08:47:55.521697 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified"
Jan 30 08:47:55 crc kubenswrapper[4758]: E0130 08:47:55.522284 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86ll5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(1787d8b1-5b19-41e5-a66d-8375f9d5bb3f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 08:47:55 crc kubenswrapper[4758]: E0130 08:47:55.523873 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="1787d8b1-5b19-41e5-a66d-8375f9d5bb3f"
Jan 30 08:47:55 crc kubenswrapper[4758]: E0130 08:47:55.531489 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified"
Jan 30 08:47:55 crc kubenswrapper[4758]: E0130 08:47:55.531615 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwnhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(0a15517a-ff48-40d1-91b4-442bfef91fc1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 08:47:55 crc kubenswrapper[4758]: E0130 08:47:55.533674 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="0a15517a-ff48-40d1-91b4-442bfef91fc1"
Jan 30 08:47:55 crc kubenswrapper[4758]: E0130 08:47:55.825705 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="1787d8b1-5b19-41e5-a66d-8375f9d5bb3f"
Jan 30 08:47:55 crc kubenswrapper[4758]: E0130 08:47:55.826151 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="0a15517a-ff48-40d1-91b4-442bfef91fc1"
Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.420776 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.420953 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mtj9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-8vbmk_openstack(76b978eb-b181-4644-890e-e990cd2ba269): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.422555 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk" podUID="76b978eb-b181-4644-890e-e990cd2ba269"
Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.444437 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.444712 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8g8xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-xxlqj_openstack(282c7a61-ccff-46d9-bd1b-e3d40f7e9992): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.446656 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" podUID="282c7a61-ccff-46d9-bd1b-e3d40f7e9992"
Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.550104 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.550679 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jfwpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-8zllc_openstack(06ace92c-6051-496d-add5-845fcd6b184f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.552151 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" podUID="06ace92c-6051-496d-add5-845fcd6b184f"
Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.558785 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.558970 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gsz2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-lfzl4_openstack(e2df4666-d5a3-4445-81a2-6cf85fd83b33): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.561305 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" podUID="e2df4666-d5a3-4445-81a2-6cf85fd83b33"
Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.834307 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" podUID="e2df4666-d5a3-4445-81a2-6cf85fd83b33"
Jan 30 08:47:56 crc kubenswrapper[4758]: E0130 08:47:56.834291 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" podUID="06ace92c-6051-496d-add5-845fcd6b184f"
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.046285 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bg2b8"]
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.405029 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9jzfn"]
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.438104 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk"
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.445077 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj"
Jan 30 08:47:57 crc kubenswrapper[4758]: W0130 08:47:57.467624 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86f64629_c944_4783_8012_7cea45690009.slice/crio-ae5cc7df5861ea98dce86f28d863ed841d1bb3b107c69a4743f6cc235e40a851 WatchSource:0}: Error finding container ae5cc7df5861ea98dce86f28d863ed841d1bb3b107c69a4743f6cc235e40a851: Status 404 returned error can't find the container with id ae5cc7df5861ea98dce86f28d863ed841d1bb3b107c69a4743f6cc235e40a851
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.504128 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-dns-svc\") pod \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") "
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.504201 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-config\") pod \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") "
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.504238 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtj9x\" (UniqueName: \"kubernetes.io/projected/76b978eb-b181-4644-890e-e990cd2ba269-kube-api-access-mtj9x\") pod \"76b978eb-b181-4644-890e-e990cd2ba269\" (UID: \"76b978eb-b181-4644-890e-e990cd2ba269\") "
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.504276 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g8xh\" (UniqueName: \"kubernetes.io/projected/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-kube-api-access-8g8xh\") pod \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\" (UID: \"282c7a61-ccff-46d9-bd1b-e3d40f7e9992\") "
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.504332 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b978eb-b181-4644-890e-e990cd2ba269-config\") pod \"76b978eb-b181-4644-890e-e990cd2ba269\" (UID: \"76b978eb-b181-4644-890e-e990cd2ba269\") "
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.505473 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76b978eb-b181-4644-890e-e990cd2ba269-config" (OuterVolumeSpecName: "config") pod "76b978eb-b181-4644-890e-e990cd2ba269" (UID: "76b978eb-b181-4644-890e-e990cd2ba269"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.506088 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-config" (OuterVolumeSpecName: "config") pod "282c7a61-ccff-46d9-bd1b-e3d40f7e9992" (UID: "282c7a61-ccff-46d9-bd1b-e3d40f7e9992"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.506356 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "282c7a61-ccff-46d9-bd1b-e3d40f7e9992" (UID: "282c7a61-ccff-46d9-bd1b-e3d40f7e9992"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.519688 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76b978eb-b181-4644-890e-e990cd2ba269-kube-api-access-mtj9x" (OuterVolumeSpecName: "kube-api-access-mtj9x") pod "76b978eb-b181-4644-890e-e990cd2ba269" (UID: "76b978eb-b181-4644-890e-e990cd2ba269"). InnerVolumeSpecName "kube-api-access-mtj9x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.524422 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-kube-api-access-8g8xh" (OuterVolumeSpecName: "kube-api-access-8g8xh") pod "282c7a61-ccff-46d9-bd1b-e3d40f7e9992" (UID: "282c7a61-ccff-46d9-bd1b-e3d40f7e9992"). InnerVolumeSpecName "kube-api-access-8g8xh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.606620 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.606667 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtj9x\" (UniqueName: \"kubernetes.io/projected/76b978eb-b181-4644-890e-e990cd2ba269-kube-api-access-mtj9x\") on node \"crc\" DevicePath \"\""
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.606684 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g8xh\" (UniqueName: \"kubernetes.io/projected/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-kube-api-access-8g8xh\") on node \"crc\" DevicePath \"\""
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.606696 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b978eb-b181-4644-890e-e990cd2ba269-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.606708 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/282c7a61-ccff-46d9-bd1b-e3d40f7e9992-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.837058 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bg2b8" event={"ID":"78294966-2fbd-4ed5-8d2a-2096ac07dac1","Type":"ContainerStarted","Data":"9f71368381ed4b23545b49a874f239adb7294957710f4d88131fb382aab84ece"}
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.838467 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9jzfn" event={"ID":"86f64629-c944-4783-8012-7cea45690009","Type":"ContainerStarted","Data":"ae5cc7df5861ea98dce86f28d863ed841d1bb3b107c69a4743f6cc235e40a851"}
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.839379 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk" event={"ID":"76b978eb-b181-4644-890e-e990cd2ba269","Type":"ContainerDied","Data":"665d5ba8bc1f090d8cef06dca81382042d1531fae5dcc1915e1bee30b5b90286"}
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.839441 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8vbmk"
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.843553 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj" event={"ID":"282c7a61-ccff-46d9-bd1b-e3d40f7e9992","Type":"ContainerDied","Data":"bac8612bddb729c12a6a05a6538fbeafbeb08dc4fa53c2af3518421001d9d89e"}
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.843577 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-xxlqj"
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.889953 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8vbmk"]
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.902289 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8vbmk"]
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.932138 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xxlqj"]
Jan 30 08:47:57 crc kubenswrapper[4758]: I0130 08:47:57.937363 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-xxlqj"]
Jan 30 08:47:58 crc kubenswrapper[4758]: I0130 08:47:58.247846 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 08:47:58 crc kubenswrapper[4758]: I0130 08:47:58.439463 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 08:47:58 crc kubenswrapper[4758]: W0130 08:47:58.448169 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7ba2509_bffc_4639_9b6f_188e2a194b7a.slice/crio-4d542364110c2a274586b606b41287e7767e3f8545622ca4bba3cac4544ed388 WatchSource:0}: Error finding container 4d542364110c2a274586b606b41287e7767e3f8545622ca4bba3cac4544ed388: Status 404 returned error can't find the container with id 4d542364110c2a274586b606b41287e7767e3f8545622ca4bba3cac4544ed388
Jan 30 08:47:58 crc kubenswrapper[4758]: I0130 08:47:58.855527 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0817ec2c-1c6d-4c1b-a019-3f2579ade18a","Type":"ContainerStarted","Data":"206886b49e89606fe3abf172131b8e9c8cba5f0c3ba7a514b2afbfb3ab939a26"}
Jan 30 08:47:58 crc kubenswrapper[4758]: I0130 08:47:58.857818 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cbb296cf-3469-43c3-9ebe-8fd1d31c00a6","Type":"ContainerStarted","Data":"a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9"}
Jan 30 08:47:58 crc kubenswrapper[4758]: I0130 08:47:58.857913 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 30 08:47:58 crc kubenswrapper[4758]: I0130 08:47:58.859115 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a7ba2509-bffc-4639-9b6f-188e2a194b7a","Type":"ContainerStarted","Data":"4d542364110c2a274586b606b41287e7767e3f8545622ca4bba3cac4544ed388"}
Jan 30 08:47:58 crc kubenswrapper[4758]: I0130 08:47:58.888163 4758 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.250555009 podStartE2EDuration="26.888142346s" podCreationTimestamp="2026-01-30 08:47:32 +0000 UTC" firstStartedPulling="2026-01-30 08:47:33.768335443 +0000 UTC m=+1058.740646994" lastFinishedPulling="2026-01-30 08:47:58.40592278 +0000 UTC m=+1083.378234331" observedRunningTime="2026-01-30 08:47:58.874187356 +0000 UTC m=+1083.846498917" watchObservedRunningTime="2026-01-30 08:47:58.888142346 +0000 UTC m=+1083.860453897" Jan 30 08:47:59 crc kubenswrapper[4758]: I0130 08:47:59.779237 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="282c7a61-ccff-46d9-bd1b-e3d40f7e9992" path="/var/lib/kubelet/pods/282c7a61-ccff-46d9-bd1b-e3d40f7e9992/volumes" Jan 30 08:47:59 crc kubenswrapper[4758]: I0130 08:47:59.779669 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76b978eb-b181-4644-890e-e990cd2ba269" path="/var/lib/kubelet/pods/76b978eb-b181-4644-890e-e990cd2ba269/volumes" Jan 30 08:48:01 crc kubenswrapper[4758]: I0130 08:48:01.882926 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bg2b8" event={"ID":"78294966-2fbd-4ed5-8d2a-2096ac07dac1","Type":"ContainerStarted","Data":"e5b1ce3ad16e899bd9ca345d751692237d36dd98405551005503ec69d2c0c7cb"} Jan 30 08:48:01 crc kubenswrapper[4758]: I0130 08:48:01.883643 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-bg2b8" Jan 30 08:48:01 crc kubenswrapper[4758]: I0130 08:48:01.885243 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9jzfn" event={"ID":"86f64629-c944-4783-8012-7cea45690009","Type":"ContainerStarted","Data":"d08ff3faa6719a19927863b031f6ed229b91aca2a658aa331153a7b4ed10cb35"} Jan 30 08:48:01 crc kubenswrapper[4758]: I0130 08:48:01.888766 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a7ba2509-bffc-4639-9b6f-188e2a194b7a","Type":"ContainerStarted","Data":"b02b095b561ac3970e9b8d3e786005a428c4cf9573cd13fb9c8eec04b187a1a6"} Jan 30 08:48:01 crc kubenswrapper[4758]: I0130 08:48:01.890568 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0817ec2c-1c6d-4c1b-a019-3f2579ade18a","Type":"ContainerStarted","Data":"e2f5c88a801381ae07b3b7f5b0f543088d4149d91c5a2302cdbc42ba81fe9e9f"} Jan 30 08:48:01 crc kubenswrapper[4758]: I0130 08:48:01.917435 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-bg2b8" podStartSLOduration=21.860111956 podStartE2EDuration="25.917412378s" podCreationTimestamp="2026-01-30 08:47:36 +0000 UTC" firstStartedPulling="2026-01-30 08:47:57.356210725 +0000 UTC m=+1082.328522276" lastFinishedPulling="2026-01-30 08:48:01.413511147 +0000 UTC m=+1086.385822698" observedRunningTime="2026-01-30 08:48:01.912758441 +0000 UTC m=+1086.885070002" watchObservedRunningTime="2026-01-30 08:48:01.917412378 +0000 UTC m=+1086.889723929" Jan 30 08:48:02 crc kubenswrapper[4758]: I0130 08:48:02.899307 4758 generic.go:334] "Generic (PLEG): container finished" podID="86f64629-c944-4783-8012-7cea45690009" containerID="d08ff3faa6719a19927863b031f6ed229b91aca2a658aa331153a7b4ed10cb35" exitCode=0 Jan 30 08:48:02 crc kubenswrapper[4758]: I0130 08:48:02.899466 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9jzfn" 
event={"ID":"86f64629-c944-4783-8012-7cea45690009","Type":"ContainerDied","Data":"d08ff3faa6719a19927863b031f6ed229b91aca2a658aa331153a7b4ed10cb35"} Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.907914 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d","Type":"ContainerStarted","Data":"2a192c887f67c306619f4adb3e16026c0a668af211f43de5f60491116763b3ef"} Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.909589 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.913697 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9jzfn" event={"ID":"86f64629-c944-4783-8012-7cea45690009","Type":"ContainerStarted","Data":"70b33f1d69b885a2b151eefc51689f2f71208f0f536efba3c9620a5ebc490934"} Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.913774 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9jzfn" event={"ID":"86f64629-c944-4783-8012-7cea45690009","Type":"ContainerStarted","Data":"59136f4228296109ffff66c5201c6b06b3163ec703744d2272130da3661b1543"} Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.914002 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.916286 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a7ba2509-bffc-4639-9b6f-188e2a194b7a","Type":"ContainerStarted","Data":"dd83aa514cfbb28fd096b1875ce9ca027292debf48424631561636ab97986280"} Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.919566 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0817ec2c-1c6d-4c1b-a019-3f2579ade18a","Type":"ContainerStarted","Data":"a0ef2f157905418976164fe4e2da020208b7f06114c84436d5486800378f5aa5"} Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.936705 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.639829342 podStartE2EDuration="33.936687377s" podCreationTimestamp="2026-01-30 08:47:30 +0000 UTC" firstStartedPulling="2026-01-30 08:47:31.948157386 +0000 UTC m=+1056.920468937" lastFinishedPulling="2026-01-30 08:48:03.245015421 +0000 UTC m=+1088.217326972" observedRunningTime="2026-01-30 08:48:03.931446532 +0000 UTC m=+1088.903758103" watchObservedRunningTime="2026-01-30 08:48:03.936687377 +0000 UTC m=+1088.908998928" Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.958297 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-9jzfn" podStartSLOduration=24.055215684 podStartE2EDuration="27.958277368s" podCreationTimestamp="2026-01-30 08:47:36 +0000 UTC" firstStartedPulling="2026-01-30 08:47:57.474267351 +0000 UTC m=+1082.446578902" lastFinishedPulling="2026-01-30 08:48:01.377329035 +0000 UTC m=+1086.349640586" observedRunningTime="2026-01-30 08:48:03.954887601 +0000 UTC m=+1088.927199172" watchObservedRunningTime="2026-01-30 08:48:03.958277368 +0000 UTC m=+1088.930588929" Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.980574 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=24.29041172 podStartE2EDuration="28.980555481s" podCreationTimestamp="2026-01-30 08:47:35 +0000 UTC" 
firstStartedPulling="2026-01-30 08:47:58.450045352 +0000 UTC m=+1083.422356903" lastFinishedPulling="2026-01-30 08:48:03.140189113 +0000 UTC m=+1088.112500664" observedRunningTime="2026-01-30 08:48:03.978401193 +0000 UTC m=+1088.950712754" watchObservedRunningTime="2026-01-30 08:48:03.980555481 +0000 UTC m=+1088.952867032" Jan 30 08:48:03 crc kubenswrapper[4758]: I0130 08:48:03.999349 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=20.26313155 podStartE2EDuration="24.999327594s" podCreationTimestamp="2026-01-30 08:47:39 +0000 UTC" firstStartedPulling="2026-01-30 08:47:58.396766471 +0000 UTC m=+1083.369078022" lastFinishedPulling="2026-01-30 08:48:03.132962515 +0000 UTC m=+1088.105274066" observedRunningTime="2026-01-30 08:48:03.994443669 +0000 UTC m=+1088.966755250" watchObservedRunningTime="2026-01-30 08:48:03.999327594 +0000 UTC m=+1088.971639145" Jan 30 08:48:04 crc kubenswrapper[4758]: I0130 08:48:04.269090 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 08:48:04 crc kubenswrapper[4758]: I0130 08:48:04.305599 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 08:48:04 crc kubenswrapper[4758]: I0130 08:48:04.424066 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 08:48:04 crc kubenswrapper[4758]: I0130 08:48:04.458702 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 08:48:04 crc kubenswrapper[4758]: I0130 08:48:04.926211 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 08:48:04 crc kubenswrapper[4758]: I0130 08:48:04.926905 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 08:48:04 crc kubenswrapper[4758]: I0130 08:48:04.926929 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:48:06 crc kubenswrapper[4758]: I0130 08:48:06.974708 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.280713 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8zllc"] Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.347431 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-nbwlv"] Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.348904 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.355762 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.361940 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-nbwlv"] Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.372791 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.465803 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-w9zqv"] Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.472629 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.478253 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-w9zqv"] Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.478357 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.500961 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skkbl\" (UniqueName: \"kubernetes.io/projected/c1232df3-7af1-411f-9116-519761089007-kube-api-access-skkbl\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.501082 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.501158 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-config\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.501174 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606067 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl2lm\" (UniqueName: \"kubernetes.io/projected/1c89386d-e6bb-45b3-bd95-970270275127-kube-api-access-xl2lm\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606125 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1c89386d-e6bb-45b3-bd95-970270275127-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606171 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c89386d-e6bb-45b3-bd95-970270275127-config\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606198 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skkbl\" (UniqueName: \"kubernetes.io/projected/c1232df3-7af1-411f-9116-519761089007-kube-api-access-skkbl\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606371 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1c89386d-e6bb-45b3-bd95-970270275127-ovn-rundir\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606445 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606505 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c89386d-e6bb-45b3-bd95-970270275127-combined-ca-bundle\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606536 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606585 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-config\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.606642 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1c89386d-e6bb-45b3-bd95-970270275127-ovs-rundir\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.607318 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.608054 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-config\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.608346 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.639143 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skkbl\" (UniqueName: \"kubernetes.io/projected/c1232df3-7af1-411f-9116-519761089007-kube-api-access-skkbl\") pod \"dnsmasq-dns-6bc7876d45-nbwlv\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.695622 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.704238 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.710631 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c89386d-e6bb-45b3-bd95-970270275127-config\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.711381 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c89386d-e6bb-45b3-bd95-970270275127-config\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.711515 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1c89386d-e6bb-45b3-bd95-970270275127-ovn-rundir\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.711800 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1c89386d-e6bb-45b3-bd95-970270275127-ovn-rundir\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.711917 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c89386d-e6bb-45b3-bd95-970270275127-combined-ca-bundle\") pod \"ovn-controller-metrics-w9zqv\" (UID: 
\"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.712643 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1c89386d-e6bb-45b3-bd95-970270275127-ovs-rundir\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.712715 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl2lm\" (UniqueName: \"kubernetes.io/projected/1c89386d-e6bb-45b3-bd95-970270275127-kube-api-access-xl2lm\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.712763 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c89386d-e6bb-45b3-bd95-970270275127-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.712873 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1c89386d-e6bb-45b3-bd95-970270275127-ovs-rundir\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.715496 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c89386d-e6bb-45b3-bd95-970270275127-combined-ca-bundle\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.722716 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c89386d-e6bb-45b3-bd95-970270275127-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.735316 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl2lm\" (UniqueName: \"kubernetes.io/projected/1c89386d-e6bb-45b3-bd95-970270275127-kube-api-access-xl2lm\") pod \"ovn-controller-metrics-w9zqv\" (UID: \"1c89386d-e6bb-45b3-bd95-970270275127\") " pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.813892 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-w9zqv" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.814503 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-config\") pod \"06ace92c-6051-496d-add5-845fcd6b184f\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.814752 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfwpj\" (UniqueName: \"kubernetes.io/projected/06ace92c-6051-496d-add5-845fcd6b184f-kube-api-access-jfwpj\") pod \"06ace92c-6051-496d-add5-845fcd6b184f\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.814774 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-dns-svc\") pod \"06ace92c-6051-496d-add5-845fcd6b184f\" (UID: \"06ace92c-6051-496d-add5-845fcd6b184f\") " Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.818467 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "06ace92c-6051-496d-add5-845fcd6b184f" (UID: "06ace92c-6051-496d-add5-845fcd6b184f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.818962 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-config" (OuterVolumeSpecName: "config") pod "06ace92c-6051-496d-add5-845fcd6b184f" (UID: "06ace92c-6051-496d-add5-845fcd6b184f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.821816 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06ace92c-6051-496d-add5-845fcd6b184f-kube-api-access-jfwpj" (OuterVolumeSpecName: "kube-api-access-jfwpj") pod "06ace92c-6051-496d-add5-845fcd6b184f" (UID: "06ace92c-6051-496d-add5-845fcd6b184f"). InnerVolumeSpecName "kube-api-access-jfwpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.823331 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.823362 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfwpj\" (UniqueName: \"kubernetes.io/projected/06ace92c-6051-496d-add5-845fcd6b184f-kube-api-access-jfwpj\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.823372 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06ace92c-6051-496d-add5-845fcd6b184f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.934817 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lfzl4"] Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.954441 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-fz6p5"] Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.955932 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.962729 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.977659 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fz6p5"] Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.985311 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.986816 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.993687 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" event={"ID":"06ace92c-6051-496d-add5-845fcd6b184f","Type":"ContainerDied","Data":"c4dbfaf7b50d7784d2c38e9516b227a07cd7a4d0e1609065c924221cceeafd2c"} Jan 30 08:48:07 crc kubenswrapper[4758]: I0130 08:48:07.993815 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8zllc" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:07.997798 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:07.997958 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:07.998217 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:07.998398 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-g55gk" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.025643 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f","Type":"ContainerStarted","Data":"e1f2f27ac70e2a7b40c793c7484fab4935efde62267c026bab00e4506643346f"} Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.031406 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.031464 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-dns-svc\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.031491 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmzbs\" (UniqueName: \"kubernetes.io/projected/41049534-cd80-47a3-b923-be969750a8b9-kube-api-access-nmzbs\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.041448 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.041508 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-config\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.109270 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.142902 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-scripts\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " 
pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.142975 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143025 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-dns-svc\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143067 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmzbs\" (UniqueName: \"kubernetes.io/projected/41049534-cd80-47a3-b923-be969750a8b9-kube-api-access-nmzbs\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143191 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143222 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143279 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143296 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143316 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143335 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-config\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143352 4758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn59q\" (UniqueName: \"kubernetes.io/projected/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-kube-api-access-bn59q\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.143380 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-config\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.144896 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.145884 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-dns-svc\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.149507 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-config\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.175926 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.178114 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8zllc"] Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.202547 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmzbs\" (UniqueName: \"kubernetes.io/projected/41049534-cd80-47a3-b923-be969750a8b9-kube-api-access-nmzbs\") pod \"dnsmasq-dns-8554648995-fz6p5\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.203314 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8zllc"] Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.245177 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-scripts\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.245290 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.245325 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.245359 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.245396 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.245426 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn59q\" (UniqueName: \"kubernetes.io/projected/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-kube-api-access-bn59q\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.245447 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-config\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.246257 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-config\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.246814 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-scripts\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.247570 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.253684 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.265129 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.268710 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.280684 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn59q\" (UniqueName: \"kubernetes.io/projected/831717ab-1273-408c-9fdf-4cd5bd2d2bb9-kube-api-access-bn59q\") pod \"ovn-northd-0\" (UID: \"831717ab-1273-408c-9fdf-4cd5bd2d2bb9\") " pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.329366 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.342381 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.544881 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-nbwlv"] Jan 30 08:48:08 crc kubenswrapper[4758]: I0130 08:48:08.740120 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-w9zqv"] Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.006177 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.035551 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" event={"ID":"c1232df3-7af1-411f-9116-519761089007","Type":"ContainerStarted","Data":"82d07fd150728b7c82f069c87ee62a0a60661daf49e51d27a49bbc40f9ce2ee9"} Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.041688 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-w9zqv" event={"ID":"1c89386d-e6bb-45b3-bd95-970270275127","Type":"ContainerStarted","Data":"d1fc649238fe21b9dcbf8bcbb9d3c9fa7d9810f19f47d42798530f205349c409"} Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.058150 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" event={"ID":"e2df4666-d5a3-4445-81a2-6cf85fd83b33","Type":"ContainerDied","Data":"b96b4fc83550ef190db47ae38eb6bf89d769a168b8ca5a2463f2d0e77a15c934"} Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.058249 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lfzl4" Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.060136 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0a15517a-ff48-40d1-91b4-442bfef91fc1","Type":"ContainerStarted","Data":"a329555e5c83c8e76ae3384416532d1a40534ec90749967ee7e67ead47128aa7"} Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.061304 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fz6p5"] Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.063054 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-dns-svc\") pod \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.063116 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsz2x\" (UniqueName: \"kubernetes.io/projected/e2df4666-d5a3-4445-81a2-6cf85fd83b33-kube-api-access-gsz2x\") pod \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.063175 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-config\") pod \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\" (UID: \"e2df4666-d5a3-4445-81a2-6cf85fd83b33\") " Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.063868 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e2df4666-d5a3-4445-81a2-6cf85fd83b33" (UID: "e2df4666-d5a3-4445-81a2-6cf85fd83b33"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.064345 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-config" (OuterVolumeSpecName: "config") pod "e2df4666-d5a3-4445-81a2-6cf85fd83b33" (UID: "e2df4666-d5a3-4445-81a2-6cf85fd83b33"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.064795 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.064815 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2df4666-d5a3-4445-81a2-6cf85fd83b33-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.075992 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2df4666-d5a3-4445-81a2-6cf85fd83b33-kube-api-access-gsz2x" (OuterVolumeSpecName: "kube-api-access-gsz2x") pod "e2df4666-d5a3-4445-81a2-6cf85fd83b33" (UID: "e2df4666-d5a3-4445-81a2-6cf85fd83b33"). InnerVolumeSpecName "kube-api-access-gsz2x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.166278 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsz2x\" (UniqueName: \"kubernetes.io/projected/e2df4666-d5a3-4445-81a2-6cf85fd83b33-kube-api-access-gsz2x\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.172949 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 08:48:09 crc kubenswrapper[4758]: W0130 08:48:09.184465 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod831717ab_1273_408c_9fdf_4cd5bd2d2bb9.slice/crio-d0c95453112924b07209d07b55cab47a407ba6020f621580ef7559e63021ec9d WatchSource:0}: Error finding container d0c95453112924b07209d07b55cab47a407ba6020f621580ef7559e63021ec9d: Status 404 returned error can't find the container with id d0c95453112924b07209d07b55cab47a407ba6020f621580ef7559e63021ec9d Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.632000 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lfzl4"] Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.639489 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lfzl4"] Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.793604 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06ace92c-6051-496d-add5-845fcd6b184f" path="/var/lib/kubelet/pods/06ace92c-6051-496d-add5-845fcd6b184f/volumes" Jan 30 08:48:09 crc kubenswrapper[4758]: I0130 08:48:09.794320 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2df4666-d5a3-4445-81a2-6cf85fd83b33" path="/var/lib/kubelet/pods/e2df4666-d5a3-4445-81a2-6cf85fd83b33/volumes" Jan 30 08:48:10 crc kubenswrapper[4758]: I0130 08:48:10.068462 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"831717ab-1273-408c-9fdf-4cd5bd2d2bb9","Type":"ContainerStarted","Data":"d0c95453112924b07209d07b55cab47a407ba6020f621580ef7559e63021ec9d"} Jan 30 08:48:10 crc kubenswrapper[4758]: I0130 08:48:10.071151 4758 generic.go:334] "Generic (PLEG): container finished" podID="41049534-cd80-47a3-b923-be969750a8b9" containerID="b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105" exitCode=0 Jan 30 08:48:10 crc kubenswrapper[4758]: I0130 08:48:10.071213 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fz6p5" event={"ID":"41049534-cd80-47a3-b923-be969750a8b9","Type":"ContainerDied","Data":"b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105"} Jan 30 08:48:10 crc kubenswrapper[4758]: I0130 08:48:10.072027 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fz6p5" event={"ID":"41049534-cd80-47a3-b923-be969750a8b9","Type":"ContainerStarted","Data":"31513955cec6f870378becf0d67edc5a8694ab95bcb31c44777622788df86a4b"} Jan 30 08:48:10 crc kubenswrapper[4758]: I0130 08:48:10.078543 4758 generic.go:334] "Generic (PLEG): container finished" podID="c1232df3-7af1-411f-9116-519761089007" containerID="b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c" exitCode=0 Jan 30 08:48:10 crc kubenswrapper[4758]: I0130 08:48:10.078598 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" 
event={"ID":"c1232df3-7af1-411f-9116-519761089007","Type":"ContainerDied","Data":"b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c"} Jan 30 08:48:10 crc kubenswrapper[4758]: I0130 08:48:10.080815 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-w9zqv" event={"ID":"1c89386d-e6bb-45b3-bd95-970270275127","Type":"ContainerStarted","Data":"00d40b09d9c85c1ba861c38b21bc369c538037b627c22d2aa32f28f7a604e474"} Jan 30 08:48:10 crc kubenswrapper[4758]: I0130 08:48:10.121716 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-w9zqv" podStartSLOduration=3.12169539 podStartE2EDuration="3.12169539s" podCreationTimestamp="2026-01-30 08:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:48:10.118672094 +0000 UTC m=+1095.090983645" watchObservedRunningTime="2026-01-30 08:48:10.12169539 +0000 UTC m=+1095.094006941" Jan 30 08:48:10 crc kubenswrapper[4758]: I0130 08:48:10.838220 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 08:48:11 crc kubenswrapper[4758]: I0130 08:48:11.093803 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"831717ab-1273-408c-9fdf-4cd5bd2d2bb9","Type":"ContainerStarted","Data":"b6d71226bb5391ed4462b44777e5829f5e784ffb42e69da0cacedd758dabe5c9"} Jan 30 08:48:11 crc kubenswrapper[4758]: I0130 08:48:11.096027 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fz6p5" event={"ID":"41049534-cd80-47a3-b923-be969750a8b9","Type":"ContainerStarted","Data":"b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24"} Jan 30 08:48:11 crc kubenswrapper[4758]: I0130 08:48:11.096849 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:11 crc kubenswrapper[4758]: I0130 08:48:11.099336 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" event={"ID":"c1232df3-7af1-411f-9116-519761089007","Type":"ContainerStarted","Data":"7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd"} Jan 30 08:48:11 crc kubenswrapper[4758]: I0130 08:48:11.099472 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:11 crc kubenswrapper[4758]: I0130 08:48:11.102138 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89ff2fc5-609f-4ca7-b997-9f8adfa5a221","Type":"ContainerStarted","Data":"80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293"} Jan 30 08:48:11 crc kubenswrapper[4758]: I0130 08:48:11.105126 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5","Type":"ContainerStarted","Data":"858a4cc294f2673581b5056b6b3f2795b013fb5990368406beb8a506660b666f"} Jan 30 08:48:11 crc kubenswrapper[4758]: I0130 08:48:11.155509 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" podStartSLOduration=3.625828727 podStartE2EDuration="4.155488811s" podCreationTimestamp="2026-01-30 08:48:07 +0000 UTC" firstStartedPulling="2026-01-30 08:48:08.576137578 +0000 UTC m=+1093.548449129" lastFinishedPulling="2026-01-30 08:48:09.105797662 +0000 UTC m=+1094.078109213" 
observedRunningTime="2026-01-30 08:48:11.150957838 +0000 UTC m=+1096.123269399" watchObservedRunningTime="2026-01-30 08:48:11.155488811 +0000 UTC m=+1096.127800362" Jan 30 08:48:11 crc kubenswrapper[4758]: I0130 08:48:11.156362 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-fz6p5" podStartSLOduration=4.156355899 podStartE2EDuration="4.156355899s" podCreationTimestamp="2026-01-30 08:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:48:11.13197857 +0000 UTC m=+1096.104290121" watchObservedRunningTime="2026-01-30 08:48:11.156355899 +0000 UTC m=+1096.128667450" Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.114121 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"831717ab-1273-408c-9fdf-4cd5bd2d2bb9","Type":"ContainerStarted","Data":"09928b1251f5b16afbf883691fc0a650ba9ae547429f579784014910fa6b9200"} Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.149886 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.788833912 podStartE2EDuration="5.149846389s" podCreationTimestamp="2026-01-30 08:48:07 +0000 UTC" firstStartedPulling="2026-01-30 08:48:09.186903762 +0000 UTC m=+1094.159215313" lastFinishedPulling="2026-01-30 08:48:10.547916239 +0000 UTC m=+1095.520227790" observedRunningTime="2026-01-30 08:48:12.146538234 +0000 UTC m=+1097.118849805" watchObservedRunningTime="2026-01-30 08:48:12.149846389 +0000 UTC m=+1097.122157960" Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.761632 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-nbwlv"] Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.801336 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-98tgp"] Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.802953 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.823912 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.853561 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-98tgp"] Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.943498 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.943595 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl29t\" (UniqueName: \"kubernetes.io/projected/05274cdb-49de-4144-85ca-3d46e1790dab-kube-api-access-xl29t\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.943632 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.943678 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:12 crc kubenswrapper[4758]: I0130 08:48:12.943723 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-config\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.045457 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.045534 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl29t\" (UniqueName: \"kubernetes.io/projected/05274cdb-49de-4144-85ca-3d46e1790dab-kube-api-access-xl29t\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.045570 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.045598 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.045970 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-config\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.046388 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.046436 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.046702 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-config\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.047248 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.064022 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl29t\" (UniqueName: \"kubernetes.io/projected/05274cdb-49de-4144-85ca-3d46e1790dab-kube-api-access-xl29t\") pod \"dnsmasq-dns-b8fbc5445-98tgp\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.125496 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.131483 4758 generic.go:334] "Generic (PLEG): container finished" podID="1787d8b1-5b19-41e5-a66d-8375f9d5bb3f" containerID="e1f2f27ac70e2a7b40c793c7484fab4935efde62267c026bab00e4506643346f" exitCode=0 Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.131538 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f","Type":"ContainerDied","Data":"e1f2f27ac70e2a7b40c793c7484fab4935efde62267c026bab00e4506643346f"} Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.136095 4758 generic.go:334] "Generic (PLEG): container finished" podID="0a15517a-ff48-40d1-91b4-442bfef91fc1" containerID="a329555e5c83c8e76ae3384416532d1a40534ec90749967ee7e67ead47128aa7" exitCode=0 Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.136650 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" podUID="c1232df3-7af1-411f-9116-519761089007" containerName="dnsmasq-dns" containerID="cri-o://7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd" gracePeriod=10 Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.136149 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0a15517a-ff48-40d1-91b4-442bfef91fc1","Type":"ContainerDied","Data":"a329555e5c83c8e76ae3384416532d1a40534ec90749967ee7e67ead47128aa7"} Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.137193 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.618982 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.632017 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-98tgp"] Jan 30 08:48:13 crc kubenswrapper[4758]: W0130 08:48:13.643176 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05274cdb_49de_4144_85ca_3d46e1790dab.slice/crio-f990ab28ecc68fe1f154128716943b17b2550ded5f1eb119ad695ca6b95e4ded WatchSource:0}: Error finding container f990ab28ecc68fe1f154128716943b17b2550ded5f1eb119ad695ca6b95e4ded: Status 404 returned error can't find the container with id f990ab28ecc68fe1f154128716943b17b2550ded5f1eb119ad695ca6b95e4ded Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.771321 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skkbl\" (UniqueName: \"kubernetes.io/projected/c1232df3-7af1-411f-9116-519761089007-kube-api-access-skkbl\") pod \"c1232df3-7af1-411f-9116-519761089007\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.771619 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-ovsdbserver-sb\") pod \"c1232df3-7af1-411f-9116-519761089007\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.771674 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-config\") pod \"c1232df3-7af1-411f-9116-519761089007\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.771840 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-dns-svc\") pod \"c1232df3-7af1-411f-9116-519761089007\" (UID: \"c1232df3-7af1-411f-9116-519761089007\") " Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.786680 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1232df3-7af1-411f-9116-519761089007-kube-api-access-skkbl" (OuterVolumeSpecName: "kube-api-access-skkbl") pod "c1232df3-7af1-411f-9116-519761089007" (UID: "c1232df3-7af1-411f-9116-519761089007"). InnerVolumeSpecName "kube-api-access-skkbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.863093 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1232df3-7af1-411f-9116-519761089007" (UID: "c1232df3-7af1-411f-9116-519761089007"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.876188 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skkbl\" (UniqueName: \"kubernetes.io/projected/c1232df3-7af1-411f-9116-519761089007-kube-api-access-skkbl\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.876432 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.890869 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c1232df3-7af1-411f-9116-519761089007" (UID: "c1232df3-7af1-411f-9116-519761089007"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.895281 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-config" (OuterVolumeSpecName: "config") pod "c1232df3-7af1-411f-9116-519761089007" (UID: "c1232df3-7af1-411f-9116-519761089007"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.977956 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.978223 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1232df3-7af1-411f-9116-519761089007-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.991410 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 30 08:48:13 crc kubenswrapper[4758]: E0130 08:48:13.991725 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1232df3-7af1-411f-9116-519761089007" containerName="init" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.991740 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1232df3-7af1-411f-9116-519761089007" containerName="init" Jan 30 08:48:13 crc kubenswrapper[4758]: E0130 08:48:13.991768 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1232df3-7af1-411f-9116-519761089007" containerName="dnsmasq-dns" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.991792 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1232df3-7af1-411f-9116-519761089007" containerName="dnsmasq-dns" Jan 30 08:48:13 crc kubenswrapper[4758]: I0130 08:48:13.991958 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1232df3-7af1-411f-9116-519761089007" containerName="dnsmasq-dns" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.002228 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.005219 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.005546 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.006322 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-kt9hk" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.011938 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.023898 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.079304 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f978baf9-b7c0-4d25-8bca-e95a018ba2af-lock\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.079395 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.079441 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6fs9\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-kube-api-access-z6fs9\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.079479 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.079501 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f978baf9-b7c0-4d25-8bca-e95a018ba2af-cache\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.079531 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f978baf9-b7c0-4d25-8bca-e95a018ba2af-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.162358 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0a15517a-ff48-40d1-91b4-442bfef91fc1","Type":"ContainerStarted","Data":"e4d0eee0ac548178b87635cf87a3ec675a9b0ac0b019cf2ca01ee8b2beb1a802"} Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.167089 4758 generic.go:334] "Generic (PLEG): container finished" 
podID="c1232df3-7af1-411f-9116-519761089007" containerID="7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd" exitCode=0 Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.167412 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" event={"ID":"c1232df3-7af1-411f-9116-519761089007","Type":"ContainerDied","Data":"7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd"} Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.167619 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" event={"ID":"c1232df3-7af1-411f-9116-519761089007","Type":"ContainerDied","Data":"82d07fd150728b7c82f069c87ee62a0a60661daf49e51d27a49bbc40f9ce2ee9"} Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.167927 4758 scope.go:117] "RemoveContainer" containerID="7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.169838 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-nbwlv" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.171249 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" event={"ID":"05274cdb-49de-4144-85ca-3d46e1790dab","Type":"ContainerStarted","Data":"04574457799aeff7875482dc2fc9ef4ba7fb15bf31c64fb74962ff0458496cd3"} Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.171312 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" event={"ID":"05274cdb-49de-4144-85ca-3d46e1790dab","Type":"ContainerStarted","Data":"f990ab28ecc68fe1f154128716943b17b2550ded5f1eb119ad695ca6b95e4ded"} Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.180852 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1787d8b1-5b19-41e5-a66d-8375f9d5bb3f","Type":"ContainerStarted","Data":"dedea668e8b90301979f85e0e7e116a2091b284d9505a7c83aca302a2a909be8"} Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.187719 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f978baf9-b7c0-4d25-8bca-e95a018ba2af-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.187793 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f978baf9-b7c0-4d25-8bca-e95a018ba2af-lock\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.187873 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.187924 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6fs9\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-kube-api-access-z6fs9\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.187950 
4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.187976 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f978baf9-b7c0-4d25-8bca-e95a018ba2af-cache\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.188478 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f978baf9-b7c0-4d25-8bca-e95a018ba2af-cache\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.189089 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f978baf9-b7c0-4d25-8bca-e95a018ba2af-lock\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.189097 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: E0130 08:48:14.189390 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:48:14 crc kubenswrapper[4758]: E0130 08:48:14.189410 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:48:14 crc kubenswrapper[4758]: E0130 08:48:14.189468 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:48:14.68944029 +0000 UTC m=+1099.661751842 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.205464 4758 scope.go:117] "RemoveContainer" containerID="b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.224659 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371991.630268 podStartE2EDuration="45.224507457s" podCreationTimestamp="2026-01-30 08:47:29 +0000 UTC" firstStartedPulling="2026-01-30 08:47:31.709367371 +0000 UTC m=+1056.681678922" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:48:14.198003581 +0000 UTC m=+1099.170315142" watchObservedRunningTime="2026-01-30 08:48:14.224507457 +0000 UTC m=+1099.196819008" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.224954 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f978baf9-b7c0-4d25-8bca-e95a018ba2af-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.232838 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6fs9\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-kube-api-access-z6fs9\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.281755 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.652890171 podStartE2EDuration="46.281729913s" podCreationTimestamp="2026-01-30 08:47:28 +0000 UTC" firstStartedPulling="2026-01-30 08:47:31.576404456 +0000 UTC m=+1056.548716007" lastFinishedPulling="2026-01-30 08:48:07.205244198 +0000 UTC m=+1092.177555749" observedRunningTime="2026-01-30 08:48:14.278914144 +0000 UTC m=+1099.251225695" watchObservedRunningTime="2026-01-30 08:48:14.281729913 +0000 UTC m=+1099.254041464" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.283879 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.284142 4758 scope.go:117] "RemoveContainer" containerID="7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd" Jan 30 08:48:14 crc kubenswrapper[4758]: E0130 08:48:14.285674 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd\": container with ID starting with 7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd not found: ID does not exist" containerID="7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.285729 4758 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd"} err="failed to get container status \"7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd\": rpc error: code = NotFound desc = could not find container \"7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd\": container with ID starting with 7ccbbc25ec1d663eaae65cd6138777ef765bc381b119ab0d1877de94ebb446bd not found: ID does not exist" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.285752 4758 scope.go:117] "RemoveContainer" containerID="b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c" Jan 30 08:48:14 crc kubenswrapper[4758]: E0130 08:48:14.286152 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c\": container with ID starting with b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c not found: ID does not exist" containerID="b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.286269 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c"} err="failed to get container status \"b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c\": rpc error: code = NotFound desc = could not find container \"b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c\": container with ID starting with b91f063becaf6302a4a1edfe696c3b865cbce279f542ac6448c8bc31a319db1c not found: ID does not exist" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.345259 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-nbwlv"] Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.352427 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-nbwlv"] Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.530168 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-pwl7q"] Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.531445 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.542236 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.542929 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.543451 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.596225 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-swiftconf\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.596490 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-pwl7q"] Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.596982 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-combined-ca-bundle\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.603098 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-ring-data-devices\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.603383 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3da668f4-93c9-4c1a-b2b7-05ad516c0637-etc-swift\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.603511 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-scripts\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.603595 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gsfj\" (UniqueName: \"kubernetes.io/projected/3da668f4-93c9-4c1a-b2b7-05ad516c0637-kube-api-access-9gsfj\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.603764 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-dispersionconf\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 
08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.605745 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-pwl7q"] Jan 30 08:48:14 crc kubenswrapper[4758]: E0130 08:48:14.607142 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-9gsfj ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-pwl7q" podUID="3da668f4-93c9-4c1a-b2b7-05ad516c0637" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.705252 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-scripts\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.705311 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gsfj\" (UniqueName: \"kubernetes.io/projected/3da668f4-93c9-4c1a-b2b7-05ad516c0637-kube-api-access-9gsfj\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.705370 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-dispersionconf\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.705397 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-swiftconf\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.705440 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-combined-ca-bundle\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.705461 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-ring-data-devices\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.705494 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.705524 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3da668f4-93c9-4c1a-b2b7-05ad516c0637-etc-swift\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " 
pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.705943 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3da668f4-93c9-4c1a-b2b7-05ad516c0637-etc-swift\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.706006 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-scripts\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.706478 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-ring-data-devices\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: E0130 08:48:14.707284 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:48:14 crc kubenswrapper[4758]: E0130 08:48:14.707418 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:48:14 crc kubenswrapper[4758]: E0130 08:48:14.707623 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:48:15.707598251 +0000 UTC m=+1100.679909862 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.710509 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-swiftconf\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.710901 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-dispersionconf\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.711829 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-combined-ca-bundle\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:14 crc kubenswrapper[4758]: I0130 08:48:14.729702 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gsfj\" (UniqueName: \"kubernetes.io/projected/3da668f4-93c9-4c1a-b2b7-05ad516c0637-kube-api-access-9gsfj\") pod \"swift-ring-rebalance-pwl7q\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.187808 4758 generic.go:334] "Generic (PLEG): container finished" podID="05274cdb-49de-4144-85ca-3d46e1790dab" containerID="04574457799aeff7875482dc2fc9ef4ba7fb15bf31c64fb74962ff0458496cd3" exitCode=0 Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.187866 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" event={"ID":"05274cdb-49de-4144-85ca-3d46e1790dab","Type":"ContainerDied","Data":"04574457799aeff7875482dc2fc9ef4ba7fb15bf31c64fb74962ff0458496cd3"} Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.190726 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.284718 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.417343 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gsfj\" (UniqueName: \"kubernetes.io/projected/3da668f4-93c9-4c1a-b2b7-05ad516c0637-kube-api-access-9gsfj\") pod \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.417655 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-swiftconf\") pod \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.417738 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-dispersionconf\") pod \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.417764 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-combined-ca-bundle\") pod \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.417831 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3da668f4-93c9-4c1a-b2b7-05ad516c0637-etc-swift\") pod \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.417870 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-ring-data-devices\") pod \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.417893 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-scripts\") pod \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\" (UID: \"3da668f4-93c9-4c1a-b2b7-05ad516c0637\") " Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.418801 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-scripts" (OuterVolumeSpecName: "scripts") pod "3da668f4-93c9-4c1a-b2b7-05ad516c0637" (UID: "3da668f4-93c9-4c1a-b2b7-05ad516c0637"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.419726 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da668f4-93c9-4c1a-b2b7-05ad516c0637-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3da668f4-93c9-4c1a-b2b7-05ad516c0637" (UID: "3da668f4-93c9-4c1a-b2b7-05ad516c0637"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.419969 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "3da668f4-93c9-4c1a-b2b7-05ad516c0637" (UID: "3da668f4-93c9-4c1a-b2b7-05ad516c0637"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.423916 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "3da668f4-93c9-4c1a-b2b7-05ad516c0637" (UID: "3da668f4-93c9-4c1a-b2b7-05ad516c0637"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.423998 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3da668f4-93c9-4c1a-b2b7-05ad516c0637" (UID: "3da668f4-93c9-4c1a-b2b7-05ad516c0637"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.429589 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "3da668f4-93c9-4c1a-b2b7-05ad516c0637" (UID: "3da668f4-93c9-4c1a-b2b7-05ad516c0637"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.432793 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da668f4-93c9-4c1a-b2b7-05ad516c0637-kube-api-access-9gsfj" (OuterVolumeSpecName: "kube-api-access-9gsfj") pod "3da668f4-93c9-4c1a-b2b7-05ad516c0637" (UID: "3da668f4-93c9-4c1a-b2b7-05ad516c0637"). InnerVolumeSpecName "kube-api-access-9gsfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.520160 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gsfj\" (UniqueName: \"kubernetes.io/projected/3da668f4-93c9-4c1a-b2b7-05ad516c0637-kube-api-access-9gsfj\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.520200 4758 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.520211 4758 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.520223 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da668f4-93c9-4c1a-b2b7-05ad516c0637-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.520234 4758 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3da668f4-93c9-4c1a-b2b7-05ad516c0637-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.520245 4758 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.520256 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3da668f4-93c9-4c1a-b2b7-05ad516c0637-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.723416 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:15 crc kubenswrapper[4758]: E0130 08:48:15.723641 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:48:15 crc kubenswrapper[4758]: E0130 08:48:15.723675 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:48:15 crc kubenswrapper[4758]: E0130 08:48:15.723744 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:48:17.723720456 +0000 UTC m=+1102.696032007 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:48:15 crc kubenswrapper[4758]: I0130 08:48:15.778509 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1232df3-7af1-411f-9116-519761089007" path="/var/lib/kubelet/pods/c1232df3-7af1-411f-9116-519761089007/volumes" Jan 30 08:48:16 crc kubenswrapper[4758]: I0130 08:48:16.198690 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-pwl7q" Jan 30 08:48:16 crc kubenswrapper[4758]: I0130 08:48:16.198694 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" event={"ID":"05274cdb-49de-4144-85ca-3d46e1790dab","Type":"ContainerStarted","Data":"fbe24d5e6be67695cb5ddcaebff818aae946840f68728ace533f14d150ec3201"} Jan 30 08:48:16 crc kubenswrapper[4758]: I0130 08:48:16.239330 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-pwl7q"] Jan 30 08:48:16 crc kubenswrapper[4758]: I0130 08:48:16.248619 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-pwl7q"] Jan 30 08:48:17 crc kubenswrapper[4758]: I0130 08:48:17.205578 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:17 crc kubenswrapper[4758]: I0130 08:48:17.753261 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:17 crc kubenswrapper[4758]: E0130 08:48:17.753445 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:48:17 crc kubenswrapper[4758]: E0130 08:48:17.753462 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:48:17 crc kubenswrapper[4758]: E0130 08:48:17.753513 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:48:21.753497785 +0000 UTC m=+1106.725809336 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:48:17 crc kubenswrapper[4758]: I0130 08:48:17.777544 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da668f4-93c9-4c1a-b2b7-05ad516c0637" path="/var/lib/kubelet/pods/3da668f4-93c9-4c1a-b2b7-05ad516c0637/volumes" Jan 30 08:48:18 crc kubenswrapper[4758]: I0130 08:48:18.331184 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:18 crc kubenswrapper[4758]: I0130 08:48:18.352378 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" podStartSLOduration=6.3523560660000005 podStartE2EDuration="6.352356066s" podCreationTimestamp="2026-01-30 08:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:48:17.233651754 +0000 UTC m=+1102.205963305" watchObservedRunningTime="2026-01-30 08:48:18.352356066 +0000 UTC m=+1103.324667617" Jan 30 08:48:19 crc kubenswrapper[4758]: I0130 08:48:19.784801 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 08:48:19 crc kubenswrapper[4758]: I0130 08:48:19.785167 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 08:48:20 crc kubenswrapper[4758]: I0130 08:48:20.639488 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 08:48:20 crc kubenswrapper[4758]: I0130 08:48:20.639532 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 08:48:21 crc kubenswrapper[4758]: I0130 08:48:21.551866 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 08:48:21 crc kubenswrapper[4758]: I0130 08:48:21.632797 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 08:48:21 crc kubenswrapper[4758]: I0130 08:48:21.817937 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:21 crc kubenswrapper[4758]: E0130 08:48:21.818228 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:48:21 crc kubenswrapper[4758]: E0130 08:48:21.818260 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:48:21 crc kubenswrapper[4758]: E0130 08:48:21.818322 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:48:29.818302092 +0000 UTC m=+1114.790613643 (durationBeforeRetry 8s). 
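[The pod_startup_latency_tracker entry above is plain timestamp arithmetic: at least in these entries, podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp (the pull timestamps are zero because no image pull was needed). A sketch reproducing the 6.352356066s figure from the two timestamps in that entry:]

    // Sketch: recompute podStartE2EDuration for dnsmasq-dns-b8fbc5445-98tgp.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Go's default time.String() layout, which these log fields use.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2026-01-30 08:48:12 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2026-01-30 08:48:18.352356066 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(running.Sub(created)) // 6.352356066s, matching podStartE2EDuration
    }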
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:48:22 crc kubenswrapper[4758]: I0130 08:48:22.386959 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:48:22 crc kubenswrapper[4758]: I0130 08:48:22.387024 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.130253 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.213370 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fz6p5"] Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.213610 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-fz6p5" podUID="41049534-cd80-47a3-b923-be969750a8b9" containerName="dnsmasq-dns" containerID="cri-o://b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24" gracePeriod=10 Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.330503 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-fz6p5" podUID="41049534-cd80-47a3-b923-be969750a8b9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.724696 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.768999 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-config\") pod \"41049534-cd80-47a3-b923-be969750a8b9\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.769079 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmzbs\" (UniqueName: \"kubernetes.io/projected/41049534-cd80-47a3-b923-be969750a8b9-kube-api-access-nmzbs\") pod \"41049534-cd80-47a3-b923-be969750a8b9\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.769152 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-nb\") pod \"41049534-cd80-47a3-b923-be969750a8b9\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.769177 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-sb\") pod \"41049534-cd80-47a3-b923-be969750a8b9\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.769218 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-dns-svc\") pod \"41049534-cd80-47a3-b923-be969750a8b9\" (UID: \"41049534-cd80-47a3-b923-be969750a8b9\") " Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.799949 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41049534-cd80-47a3-b923-be969750a8b9-kube-api-access-nmzbs" (OuterVolumeSpecName: "kube-api-access-nmzbs") pod "41049534-cd80-47a3-b923-be969750a8b9" (UID: "41049534-cd80-47a3-b923-be969750a8b9"). InnerVolumeSpecName "kube-api-access-nmzbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.817966 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "41049534-cd80-47a3-b923-be969750a8b9" (UID: "41049534-cd80-47a3-b923-be969750a8b9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.826376 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "41049534-cd80-47a3-b923-be969750a8b9" (UID: "41049534-cd80-47a3-b923-be969750a8b9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.827971 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-config" (OuterVolumeSpecName: "config") pod "41049534-cd80-47a3-b923-be969750a8b9" (UID: "41049534-cd80-47a3-b923-be969750a8b9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.840726 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "41049534-cd80-47a3-b923-be969750a8b9" (UID: "41049534-cd80-47a3-b923-be969750a8b9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.869796 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.873293 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmzbs\" (UniqueName: \"kubernetes.io/projected/41049534-cd80-47a3-b923-be969750a8b9-kube-api-access-nmzbs\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.873331 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.873343 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.873352 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.873361 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41049534-cd80-47a3-b923-be969750a8b9-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:23 crc kubenswrapper[4758]: I0130 08:48:23.946649 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.258848 4758 generic.go:334] "Generic (PLEG): container finished" podID="41049534-cd80-47a3-b923-be969750a8b9" containerID="b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24" exitCode=0 Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.258943 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fz6p5" event={"ID":"41049534-cd80-47a3-b923-be969750a8b9","Type":"ContainerDied","Data":"b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24"} Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.258999 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fz6p5" event={"ID":"41049534-cd80-47a3-b923-be969750a8b9","Type":"ContainerDied","Data":"31513955cec6f870378becf0d67edc5a8694ab95bcb31c44777622788df86a4b"} Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.259024 4758 scope.go:117] "RemoveContainer" containerID="b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24" Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.259901 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-fz6p5" Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.294745 4758 scope.go:117] "RemoveContainer" containerID="b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105" Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.309173 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fz6p5"] Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.315973 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fz6p5"] Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.332625 4758 scope.go:117] "RemoveContainer" containerID="b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24" Jan 30 08:48:24 crc kubenswrapper[4758]: E0130 08:48:24.333160 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24\": container with ID starting with b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24 not found: ID does not exist" containerID="b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24" Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.333195 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24"} err="failed to get container status \"b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24\": rpc error: code = NotFound desc = could not find container \"b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24\": container with ID starting with b3edfd0edec82652350dab015c099e9be6da38aac8a1c03f552b514f7d231d24 not found: ID does not exist" Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.333228 4758 scope.go:117] "RemoveContainer" containerID="b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105" Jan 30 08:48:24 crc kubenswrapper[4758]: E0130 08:48:24.333491 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105\": container with ID starting with b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105 not found: ID does not exist" containerID="b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105" Jan 30 08:48:24 crc kubenswrapper[4758]: I0130 08:48:24.333560 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105"} err="failed to get container status \"b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105\": rpc error: code = NotFound desc = could not find container \"b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105\": container with ID starting with b272e0b276fba2dceace81b2f86fd8e6c241393699d2f5801ba502638ea95105 not found: ID does not exist" Jan 30 08:48:25 crc kubenswrapper[4758]: I0130 08:48:25.778323 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41049534-cd80-47a3-b923-be969750a8b9" path="/var/lib/kubelet/pods/41049534-cd80-47a3-b923-be969750a8b9/volumes" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.174119 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-cps5h"] Jan 30 08:48:28 crc kubenswrapper[4758]: E0130 08:48:28.174862 4758 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="41049534-cd80-47a3-b923-be969750a8b9" containerName="dnsmasq-dns" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.174884 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="41049534-cd80-47a3-b923-be969750a8b9" containerName="dnsmasq-dns" Jan 30 08:48:28 crc kubenswrapper[4758]: E0130 08:48:28.174937 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41049534-cd80-47a3-b923-be969750a8b9" containerName="init" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.174946 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="41049534-cd80-47a3-b923-be969750a8b9" containerName="init" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.175174 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="41049534-cd80-47a3-b923-be969750a8b9" containerName="dnsmasq-dns" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.175802 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.177704 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.192969 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cps5h"] Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.243071 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b3ba93-ce04-4374-8fed-db25eb3b3065-operator-scripts\") pod \"root-account-create-update-cps5h\" (UID: \"88b3ba93-ce04-4374-8fed-db25eb3b3065\") " pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.243112 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kxjx\" (UniqueName: \"kubernetes.io/projected/88b3ba93-ce04-4374-8fed-db25eb3b3065-kube-api-access-6kxjx\") pod \"root-account-create-update-cps5h\" (UID: \"88b3ba93-ce04-4374-8fed-db25eb3b3065\") " pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.344308 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b3ba93-ce04-4374-8fed-db25eb3b3065-operator-scripts\") pod \"root-account-create-update-cps5h\" (UID: \"88b3ba93-ce04-4374-8fed-db25eb3b3065\") " pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.344400 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kxjx\" (UniqueName: \"kubernetes.io/projected/88b3ba93-ce04-4374-8fed-db25eb3b3065-kube-api-access-6kxjx\") pod \"root-account-create-update-cps5h\" (UID: \"88b3ba93-ce04-4374-8fed-db25eb3b3065\") " pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.346225 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b3ba93-ce04-4374-8fed-db25eb3b3065-operator-scripts\") pod \"root-account-create-update-cps5h\" (UID: \"88b3ba93-ce04-4374-8fed-db25eb3b3065\") " pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.372381 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kxjx\" (UniqueName: \"kubernetes.io/projected/88b3ba93-ce04-4374-8fed-db25eb3b3065-kube-api-access-6kxjx\") pod \"root-account-create-update-cps5h\" (UID: \"88b3ba93-ce04-4374-8fed-db25eb3b3065\") " pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.409237 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.495098 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:28 crc kubenswrapper[4758]: I0130 08:48:28.968752 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cps5h"] Jan 30 08:48:29 crc kubenswrapper[4758]: I0130 08:48:29.292375 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cps5h" event={"ID":"88b3ba93-ce04-4374-8fed-db25eb3b3065","Type":"ContainerStarted","Data":"183015e47f5f83a8e3749301c9da8e5904ca92fc6fc6359bf8ab059b1e015b60"} Jan 30 08:48:29 crc kubenswrapper[4758]: I0130 08:48:29.292661 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cps5h" event={"ID":"88b3ba93-ce04-4374-8fed-db25eb3b3065","Type":"ContainerStarted","Data":"8e847bc505b4253fd572f62d30b4950f0e0d038f85122290c011602277ae9076"} Jan 30 08:48:29 crc kubenswrapper[4758]: I0130 08:48:29.309498 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-cps5h" podStartSLOduration=1.309474818 podStartE2EDuration="1.309474818s" podCreationTimestamp="2026-01-30 08:48:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:48:29.307154454 +0000 UTC m=+1114.279466025" watchObservedRunningTime="2026-01-30 08:48:29.309474818 +0000 UTC m=+1114.281786369" Jan 30 08:48:29 crc kubenswrapper[4758]: I0130 08:48:29.869162 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:48:29 crc kubenswrapper[4758]: E0130 08:48:29.869393 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:48:29 crc kubenswrapper[4758]: E0130 08:48:29.869427 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:48:29 crc kubenswrapper[4758]: E0130 08:48:29.869489 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:48:45.869471404 +0000 UTC m=+1130.841782955 (durationBeforeRetry 16s). 
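[Across the four etc-swift attempts so far, the kubelet doubles durationBeforeRetry: 2s, 4s, 8s, and now 16s, with the next retry scheduled for 08:48:45. A sketch of that capped doubling; only the 2s seed and the doubling factor are taken from the log, the ceiling is an assumption for illustration:]

    // Sketch: capped exponential backoff as observed in the
    // nestedpendingoperations entries in this log.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 2 * time.Second    // first durationBeforeRetry in the log
        maxDelay := 2 * time.Minute // assumed ceiling
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d failed, no retries permitted for %s\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }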
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.300143 4758 generic.go:334] "Generic (PLEG): container finished" podID="88b3ba93-ce04-4374-8fed-db25eb3b3065" containerID="183015e47f5f83a8e3749301c9da8e5904ca92fc6fc6359bf8ab059b1e015b60" exitCode=0 Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.300184 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cps5h" event={"ID":"88b3ba93-ce04-4374-8fed-db25eb3b3065","Type":"ContainerDied","Data":"183015e47f5f83a8e3749301c9da8e5904ca92fc6fc6359bf8ab059b1e015b60"} Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.406210 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-8r78c"] Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.407130 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.453617 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8r78c"] Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.478723 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wm8l\" (UniqueName: \"kubernetes.io/projected/a04121a4-3ab4-48c5-903e-8d0002771ff7-kube-api-access-5wm8l\") pod \"keystone-db-create-8r78c\" (UID: \"a04121a4-3ab4-48c5-903e-8d0002771ff7\") " pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.478799 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04121a4-3ab4-48c5-903e-8d0002771ff7-operator-scripts\") pod \"keystone-db-create-8r78c\" (UID: \"a04121a4-3ab4-48c5-903e-8d0002771ff7\") " pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.580191 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wm8l\" (UniqueName: \"kubernetes.io/projected/a04121a4-3ab4-48c5-903e-8d0002771ff7-kube-api-access-5wm8l\") pod \"keystone-db-create-8r78c\" (UID: \"a04121a4-3ab4-48c5-903e-8d0002771ff7\") " pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.580564 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04121a4-3ab4-48c5-903e-8d0002771ff7-operator-scripts\") pod \"keystone-db-create-8r78c\" (UID: \"a04121a4-3ab4-48c5-903e-8d0002771ff7\") " pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.581517 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04121a4-3ab4-48c5-903e-8d0002771ff7-operator-scripts\") pod \"keystone-db-create-8r78c\" (UID: \"a04121a4-3ab4-48c5-903e-8d0002771ff7\") " pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.612164 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wm8l\" (UniqueName: 
\"kubernetes.io/projected/a04121a4-3ab4-48c5-903e-8d0002771ff7-kube-api-access-5wm8l\") pod \"keystone-db-create-8r78c\" (UID: \"a04121a4-3ab4-48c5-903e-8d0002771ff7\") " pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.614191 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d642-account-create-update-zwktw"] Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.615154 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.619222 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.635863 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d642-account-create-update-zwktw"] Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.682384 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq5wz\" (UniqueName: \"kubernetes.io/projected/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-kube-api-access-jq5wz\") pod \"keystone-d642-account-create-update-zwktw\" (UID: \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\") " pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.683789 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-operator-scripts\") pod \"keystone-d642-account-create-update-zwktw\" (UID: \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\") " pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.738886 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.786003 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq5wz\" (UniqueName: \"kubernetes.io/projected/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-kube-api-access-jq5wz\") pod \"keystone-d642-account-create-update-zwktw\" (UID: \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\") " pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.786132 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-operator-scripts\") pod \"keystone-d642-account-create-update-zwktw\" (UID: \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\") " pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.786879 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-operator-scripts\") pod \"keystone-d642-account-create-update-zwktw\" (UID: \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\") " pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.809529 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq5wz\" (UniqueName: \"kubernetes.io/projected/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-kube-api-access-jq5wz\") pod \"keystone-d642-account-create-update-zwktw\" (UID: \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\") " pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.959181 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-vv7kf"] Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.960900 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.972509 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vv7kf"] Jan 30 08:48:30 crc kubenswrapper[4758]: I0130 08:48:30.984413 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.068601 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-2wfwb"] Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.079735 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.086178 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-72be-account-create-update-gphrs"] Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.087106 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.089432 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.096551 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2wfwb"] Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.103370 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmkwp\" (UniqueName: \"kubernetes.io/projected/418d35e8-ad4e-4051-a0f1-fc9179300441-kube-api-access-cmkwp\") pod \"placement-db-create-vv7kf\" (UID: \"418d35e8-ad4e-4051-a0f1-fc9179300441\") " pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.103451 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/418d35e8-ad4e-4051-a0f1-fc9179300441-operator-scripts\") pod \"placement-db-create-vv7kf\" (UID: \"418d35e8-ad4e-4051-a0f1-fc9179300441\") " pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.115490 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-72be-account-create-update-gphrs"] Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.207408 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2121970a-59f7-4943-ac1e-fa675e5eef8e-operator-scripts\") pod \"placement-72be-account-create-update-gphrs\" (UID: \"2121970a-59f7-4943-ac1e-fa675e5eef8e\") " pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.207511 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f506e0d-da4f-4243-923b-f7e102fafd92-operator-scripts\") pod \"glance-db-create-2wfwb\" (UID: \"2f506e0d-da4f-4243-923b-f7e102fafd92\") " pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.207629 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wlpx\" (UniqueName: \"kubernetes.io/projected/2f506e0d-da4f-4243-923b-f7e102fafd92-kube-api-access-8wlpx\") pod \"glance-db-create-2wfwb\" (UID: \"2f506e0d-da4f-4243-923b-f7e102fafd92\") " pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.207668 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmkwp\" (UniqueName: \"kubernetes.io/projected/418d35e8-ad4e-4051-a0f1-fc9179300441-kube-api-access-cmkwp\") pod \"placement-db-create-vv7kf\" (UID: \"418d35e8-ad4e-4051-a0f1-fc9179300441\") " pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.207757 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/418d35e8-ad4e-4051-a0f1-fc9179300441-operator-scripts\") pod \"placement-db-create-vv7kf\" (UID: \"418d35e8-ad4e-4051-a0f1-fc9179300441\") " pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.207802 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hllsb\" (UniqueName: \"kubernetes.io/projected/2121970a-59f7-4943-ac1e-fa675e5eef8e-kube-api-access-hllsb\") pod \"placement-72be-account-create-update-gphrs\" (UID: \"2121970a-59f7-4943-ac1e-fa675e5eef8e\") " pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.212328 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/418d35e8-ad4e-4051-a0f1-fc9179300441-operator-scripts\") pod \"placement-db-create-vv7kf\" (UID: \"418d35e8-ad4e-4051-a0f1-fc9179300441\") " pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.222959 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-50ef-account-create-update-bskhv"] Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.224017 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.228353 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmkwp\" (UniqueName: \"kubernetes.io/projected/418d35e8-ad4e-4051-a0f1-fc9179300441-kube-api-access-cmkwp\") pod \"placement-db-create-vv7kf\" (UID: \"418d35e8-ad4e-4051-a0f1-fc9179300441\") " pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.229239 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.236870 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-50ef-account-create-update-bskhv"] Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.277353 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8r78c"] Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.292123 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.310471 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hllsb\" (UniqueName: \"kubernetes.io/projected/2121970a-59f7-4943-ac1e-fa675e5eef8e-kube-api-access-hllsb\") pod \"placement-72be-account-create-update-gphrs\" (UID: \"2121970a-59f7-4943-ac1e-fa675e5eef8e\") " pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.310563 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e9c3b23-6678-42de-923a-27ebb5fb61a3-operator-scripts\") pod \"glance-50ef-account-create-update-bskhv\" (UID: \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\") " pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.310670 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-275pt\" (UniqueName: \"kubernetes.io/projected/1e9c3b23-6678-42de-923a-27ebb5fb61a3-kube-api-access-275pt\") pod \"glance-50ef-account-create-update-bskhv\" (UID: \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\") " pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.310790 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2121970a-59f7-4943-ac1e-fa675e5eef8e-operator-scripts\") pod \"placement-72be-account-create-update-gphrs\" (UID: \"2121970a-59f7-4943-ac1e-fa675e5eef8e\") " pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.310937 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f506e0d-da4f-4243-923b-f7e102fafd92-operator-scripts\") pod \"glance-db-create-2wfwb\" (UID: \"2f506e0d-da4f-4243-923b-f7e102fafd92\") " pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.311108 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wlpx\" (UniqueName: \"kubernetes.io/projected/2f506e0d-da4f-4243-923b-f7e102fafd92-kube-api-access-8wlpx\") pod \"glance-db-create-2wfwb\" (UID: \"2f506e0d-da4f-4243-923b-f7e102fafd92\") " pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.312227 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2121970a-59f7-4943-ac1e-fa675e5eef8e-operator-scripts\") pod \"placement-72be-account-create-update-gphrs\" (UID: \"2121970a-59f7-4943-ac1e-fa675e5eef8e\") " pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.312726 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f506e0d-da4f-4243-923b-f7e102fafd92-operator-scripts\") pod \"glance-db-create-2wfwb\" (UID: \"2f506e0d-da4f-4243-923b-f7e102fafd92\") " pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.332716 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wlpx\" (UniqueName: 
\"kubernetes.io/projected/2f506e0d-da4f-4243-923b-f7e102fafd92-kube-api-access-8wlpx\") pod \"glance-db-create-2wfwb\" (UID: \"2f506e0d-da4f-4243-923b-f7e102fafd92\") " pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.333418 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hllsb\" (UniqueName: \"kubernetes.io/projected/2121970a-59f7-4943-ac1e-fa675e5eef8e-kube-api-access-hllsb\") pod \"placement-72be-account-create-update-gphrs\" (UID: \"2121970a-59f7-4943-ac1e-fa675e5eef8e\") " pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.336706 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8r78c" event={"ID":"a04121a4-3ab4-48c5-903e-8d0002771ff7","Type":"ContainerStarted","Data":"e494bb189e2d0ac5e4da8332fef46a9f2687ebe52e965755cf5c04ba2a6d0947"} Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.413689 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e9c3b23-6678-42de-923a-27ebb5fb61a3-operator-scripts\") pod \"glance-50ef-account-create-update-bskhv\" (UID: \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\") " pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.413760 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-275pt\" (UniqueName: \"kubernetes.io/projected/1e9c3b23-6678-42de-923a-27ebb5fb61a3-kube-api-access-275pt\") pod \"glance-50ef-account-create-update-bskhv\" (UID: \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\") " pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.414883 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e9c3b23-6678-42de-923a-27ebb5fb61a3-operator-scripts\") pod \"glance-50ef-account-create-update-bskhv\" (UID: \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\") " pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.420965 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.425052 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.431821 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-275pt\" (UniqueName: \"kubernetes.io/projected/1e9c3b23-6678-42de-923a-27ebb5fb61a3-kube-api-access-275pt\") pod \"glance-50ef-account-create-update-bskhv\" (UID: \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\") " pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.517964 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-bg2b8" podUID="78294966-2fbd-4ed5-8d2a-2096ac07dac1" containerName="ovn-controller" probeResult="failure" output=< Jan 30 08:48:31 crc kubenswrapper[4758]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 08:48:31 crc kubenswrapper[4758]: > Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.561435 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.596533 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d642-account-create-update-zwktw"] Jan 30 08:48:31 crc kubenswrapper[4758]: W0130 08:48:31.621199 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod585a6b3e_695a_4ddc_91a6_9b39b241ffd0.slice/crio-5eacfb487ac0c827b02d442b7a7c9a36a5772bb34d2fc875864664f17c7f68b6 WatchSource:0}: Error finding container 5eacfb487ac0c827b02d442b7a7c9a36a5772bb34d2fc875864664f17c7f68b6: Status 404 returned error can't find the container with id 5eacfb487ac0c827b02d442b7a7c9a36a5772bb34d2fc875864664f17c7f68b6 Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.871469 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vv7kf"] Jan 30 08:48:31 crc kubenswrapper[4758]: I0130 08:48:31.964222 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.038004 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kxjx\" (UniqueName: \"kubernetes.io/projected/88b3ba93-ce04-4374-8fed-db25eb3b3065-kube-api-access-6kxjx\") pod \"88b3ba93-ce04-4374-8fed-db25eb3b3065\" (UID: \"88b3ba93-ce04-4374-8fed-db25eb3b3065\") " Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.038095 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b3ba93-ce04-4374-8fed-db25eb3b3065-operator-scripts\") pod \"88b3ba93-ce04-4374-8fed-db25eb3b3065\" (UID: \"88b3ba93-ce04-4374-8fed-db25eb3b3065\") " Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.039829 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88b3ba93-ce04-4374-8fed-db25eb3b3065-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88b3ba93-ce04-4374-8fed-db25eb3b3065" (UID: "88b3ba93-ce04-4374-8fed-db25eb3b3065"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.045598 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88b3ba93-ce04-4374-8fed-db25eb3b3065-kube-api-access-6kxjx" (OuterVolumeSpecName: "kube-api-access-6kxjx") pod "88b3ba93-ce04-4374-8fed-db25eb3b3065" (UID: "88b3ba93-ce04-4374-8fed-db25eb3b3065"). InnerVolumeSpecName "kube-api-access-6kxjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.141429 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b3ba93-ce04-4374-8fed-db25eb3b3065-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.141468 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kxjx\" (UniqueName: \"kubernetes.io/projected/88b3ba93-ce04-4374-8fed-db25eb3b3065-kube-api-access-6kxjx\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.196049 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2wfwb"] Jan 30 08:48:32 crc kubenswrapper[4758]: W0130 08:48:32.200930 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2121970a_59f7_4943_ac1e_fa675e5eef8e.slice/crio-145e4e1ba4d4aa1093bcc2768b2faa2eacc47565152f77014a3c9ea2fe4efec5 WatchSource:0}: Error finding container 145e4e1ba4d4aa1093bcc2768b2faa2eacc47565152f77014a3c9ea2fe4efec5: Status 404 returned error can't find the container with id 145e4e1ba4d4aa1093bcc2768b2faa2eacc47565152f77014a3c9ea2fe4efec5 Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.211840 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-72be-account-create-update-gphrs"] Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.353434 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2wfwb" event={"ID":"2f506e0d-da4f-4243-923b-f7e102fafd92","Type":"ContainerStarted","Data":"61116589352b40224a96215bd65987be6542324de26a624da86a1ed3cc2be782"} Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.355739 4758 generic.go:334] "Generic (PLEG): container finished" podID="585a6b3e-695a-4ddc-91a6-9b39b241ffd0" containerID="a6e4fb8d0b259571728fd753067c06a5a86fb0519ae153af2e5b3b44c3d06d59" exitCode=0 Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.355800 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d642-account-create-update-zwktw" event={"ID":"585a6b3e-695a-4ddc-91a6-9b39b241ffd0","Type":"ContainerDied","Data":"a6e4fb8d0b259571728fd753067c06a5a86fb0519ae153af2e5b3b44c3d06d59"} Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.355825 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d642-account-create-update-zwktw" event={"ID":"585a6b3e-695a-4ddc-91a6-9b39b241ffd0","Type":"ContainerStarted","Data":"5eacfb487ac0c827b02d442b7a7c9a36a5772bb34d2fc875864664f17c7f68b6"} Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.359514 4758 generic.go:334] "Generic (PLEG): container finished" podID="a04121a4-3ab4-48c5-903e-8d0002771ff7" containerID="26c6c01e8de6266c91045a1615b456f02f5eaff630949bebc138b39fd169f580" exitCode=0 Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.359591 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8r78c" event={"ID":"a04121a4-3ab4-48c5-903e-8d0002771ff7","Type":"ContainerDied","Data":"26c6c01e8de6266c91045a1615b456f02f5eaff630949bebc138b39fd169f580"} Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.362304 4758 generic.go:334] "Generic (PLEG): container finished" podID="418d35e8-ad4e-4051-a0f1-fc9179300441" containerID="79153cef375b67170413da87462ce99cc5566739d6bdf2f6c1513a6b0ebe4db9" exitCode=0 Jan 30 08:48:32 crc 
kubenswrapper[4758]: I0130 08:48:32.362375 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vv7kf" event={"ID":"418d35e8-ad4e-4051-a0f1-fc9179300441","Type":"ContainerDied","Data":"79153cef375b67170413da87462ce99cc5566739d6bdf2f6c1513a6b0ebe4db9"} Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.362413 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vv7kf" event={"ID":"418d35e8-ad4e-4051-a0f1-fc9179300441","Type":"ContainerStarted","Data":"f63a7aacdece4e18ffb5932d48f95957334f667a295b639f15c326a86a477819"} Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.364242 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cps5h" event={"ID":"88b3ba93-ce04-4374-8fed-db25eb3b3065","Type":"ContainerDied","Data":"8e847bc505b4253fd572f62d30b4950f0e0d038f85122290c011602277ae9076"} Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.364270 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e847bc505b4253fd572f62d30b4950f0e0d038f85122290c011602277ae9076" Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.364350 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cps5h" Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.367207 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-72be-account-create-update-gphrs" event={"ID":"2121970a-59f7-4943-ac1e-fa675e5eef8e","Type":"ContainerStarted","Data":"145e4e1ba4d4aa1093bcc2768b2faa2eacc47565152f77014a3c9ea2fe4efec5"} Jan 30 08:48:32 crc kubenswrapper[4758]: I0130 08:48:32.398267 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-50ef-account-create-update-bskhv"] Jan 30 08:48:33 crc kubenswrapper[4758]: I0130 08:48:33.380010 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-72be-account-create-update-gphrs" event={"ID":"2121970a-59f7-4943-ac1e-fa675e5eef8e","Type":"ContainerStarted","Data":"1b0811ae6d53b97ab814b4974446d3583f0018b01ea502fb69edd27ab1156add"} Jan 30 08:48:33 crc kubenswrapper[4758]: I0130 08:48:33.384360 4758 generic.go:334] "Generic (PLEG): container finished" podID="2f506e0d-da4f-4243-923b-f7e102fafd92" containerID="aabb12ec35a8ae64e19c55b39db2dfd8844c3fe734c3d7e0940dfeeffb34c9d1" exitCode=0 Jan 30 08:48:33 crc kubenswrapper[4758]: I0130 08:48:33.384464 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2wfwb" event={"ID":"2f506e0d-da4f-4243-923b-f7e102fafd92","Type":"ContainerDied","Data":"aabb12ec35a8ae64e19c55b39db2dfd8844c3fe734c3d7e0940dfeeffb34c9d1"} Jan 30 08:48:33 crc kubenswrapper[4758]: I0130 08:48:33.386543 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50ef-account-create-update-bskhv" event={"ID":"1e9c3b23-6678-42de-923a-27ebb5fb61a3","Type":"ContainerStarted","Data":"daf0e92dedcf6f1f855f17b37c2a9bda8d265d7523eda70616bc5f00569e869a"} Jan 30 08:48:33 crc kubenswrapper[4758]: I0130 08:48:33.386577 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50ef-account-create-update-bskhv" event={"ID":"1e9c3b23-6678-42de-923a-27ebb5fb61a3","Type":"ContainerStarted","Data":"c65be4ec5473b830e36189ebc74b7fa4306eae2315b26780268f290367ce6c90"} Jan 30 08:48:33 crc kubenswrapper[4758]: I0130 08:48:33.469788 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-72be-account-create-update-gphrs" podStartSLOduration=2.46976517 podStartE2EDuration="2.46976517s" podCreationTimestamp="2026-01-30 08:48:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:48:33.410542657 +0000 UTC m=+1118.382854208" watchObservedRunningTime="2026-01-30 08:48:33.46976517 +0000 UTC m=+1118.442076721" Jan 30 08:48:33 crc kubenswrapper[4758]: I0130 08:48:33.483568 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-50ef-account-create-update-bskhv" podStartSLOduration=2.4835527170000002 podStartE2EDuration="2.483552717s" podCreationTimestamp="2026-01-30 08:48:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:48:33.467596431 +0000 UTC m=+1118.439907982" watchObservedRunningTime="2026-01-30 08:48:33.483552717 +0000 UTC m=+1118.455864268" Jan 30 08:48:33 crc kubenswrapper[4758]: I0130 08:48:33.986373 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.107160 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-operator-scripts\") pod \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\" (UID: \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\") " Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.107259 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq5wz\" (UniqueName: \"kubernetes.io/projected/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-kube-api-access-jq5wz\") pod \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\" (UID: \"585a6b3e-695a-4ddc-91a6-9b39b241ffd0\") " Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.108128 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "585a6b3e-695a-4ddc-91a6-9b39b241ffd0" (UID: "585a6b3e-695a-4ddc-91a6-9b39b241ffd0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.113907 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-kube-api-access-jq5wz" (OuterVolumeSpecName: "kube-api-access-jq5wz") pod "585a6b3e-695a-4ddc-91a6-9b39b241ffd0" (UID: "585a6b3e-695a-4ddc-91a6-9b39b241ffd0"). InnerVolumeSpecName "kube-api-access-jq5wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.149571 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.166586 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.198255 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-cps5h"] Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.206838 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-cps5h"] Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.209812 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.209845 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq5wz\" (UniqueName: \"kubernetes.io/projected/585a6b3e-695a-4ddc-91a6-9b39b241ffd0-kube-api-access-jq5wz\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.311250 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/418d35e8-ad4e-4051-a0f1-fc9179300441-operator-scripts\") pod \"418d35e8-ad4e-4051-a0f1-fc9179300441\" (UID: \"418d35e8-ad4e-4051-a0f1-fc9179300441\") " Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.311729 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/418d35e8-ad4e-4051-a0f1-fc9179300441-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "418d35e8-ad4e-4051-a0f1-fc9179300441" (UID: "418d35e8-ad4e-4051-a0f1-fc9179300441"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.311738 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmkwp\" (UniqueName: \"kubernetes.io/projected/418d35e8-ad4e-4051-a0f1-fc9179300441-kube-api-access-cmkwp\") pod \"418d35e8-ad4e-4051-a0f1-fc9179300441\" (UID: \"418d35e8-ad4e-4051-a0f1-fc9179300441\") " Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.311871 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wm8l\" (UniqueName: \"kubernetes.io/projected/a04121a4-3ab4-48c5-903e-8d0002771ff7-kube-api-access-5wm8l\") pod \"a04121a4-3ab4-48c5-903e-8d0002771ff7\" (UID: \"a04121a4-3ab4-48c5-903e-8d0002771ff7\") " Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.311948 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04121a4-3ab4-48c5-903e-8d0002771ff7-operator-scripts\") pod \"a04121a4-3ab4-48c5-903e-8d0002771ff7\" (UID: \"a04121a4-3ab4-48c5-903e-8d0002771ff7\") " Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.312489 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/418d35e8-ad4e-4051-a0f1-fc9179300441-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.313209 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a04121a4-3ab4-48c5-903e-8d0002771ff7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a04121a4-3ab4-48c5-903e-8d0002771ff7" (UID: "a04121a4-3ab4-48c5-903e-8d0002771ff7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.316030 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04121a4-3ab4-48c5-903e-8d0002771ff7-kube-api-access-5wm8l" (OuterVolumeSpecName: "kube-api-access-5wm8l") pod "a04121a4-3ab4-48c5-903e-8d0002771ff7" (UID: "a04121a4-3ab4-48c5-903e-8d0002771ff7"). InnerVolumeSpecName "kube-api-access-5wm8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.316524 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/418d35e8-ad4e-4051-a0f1-fc9179300441-kube-api-access-cmkwp" (OuterVolumeSpecName: "kube-api-access-cmkwp") pod "418d35e8-ad4e-4051-a0f1-fc9179300441" (UID: "418d35e8-ad4e-4051-a0f1-fc9179300441"). InnerVolumeSpecName "kube-api-access-cmkwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.397875 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d642-account-create-update-zwktw" event={"ID":"585a6b3e-695a-4ddc-91a6-9b39b241ffd0","Type":"ContainerDied","Data":"5eacfb487ac0c827b02d442b7a7c9a36a5772bb34d2fc875864664f17c7f68b6"} Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.397915 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eacfb487ac0c827b02d442b7a7c9a36a5772bb34d2fc875864664f17c7f68b6" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.399163 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d642-account-create-update-zwktw" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.399657 4758 generic.go:334] "Generic (PLEG): container finished" podID="1e9c3b23-6678-42de-923a-27ebb5fb61a3" containerID="daf0e92dedcf6f1f855f17b37c2a9bda8d265d7523eda70616bc5f00569e869a" exitCode=0 Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.399700 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50ef-account-create-update-bskhv" event={"ID":"1e9c3b23-6678-42de-923a-27ebb5fb61a3","Type":"ContainerDied","Data":"daf0e92dedcf6f1f855f17b37c2a9bda8d265d7523eda70616bc5f00569e869a"} Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.401854 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-8r78c" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.402447 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8r78c" event={"ID":"a04121a4-3ab4-48c5-903e-8d0002771ff7","Type":"ContainerDied","Data":"e494bb189e2d0ac5e4da8332fef46a9f2687ebe52e965755cf5c04ba2a6d0947"} Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.402580 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e494bb189e2d0ac5e4da8332fef46a9f2687ebe52e965755cf5c04ba2a6d0947" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.407371 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vv7kf" event={"ID":"418d35e8-ad4e-4051-a0f1-fc9179300441","Type":"ContainerDied","Data":"f63a7aacdece4e18ffb5932d48f95957334f667a295b639f15c326a86a477819"} Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.407415 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f63a7aacdece4e18ffb5932d48f95957334f667a295b639f15c326a86a477819" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.407581 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vv7kf" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.414355 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a04121a4-3ab4-48c5-903e-8d0002771ff7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.414392 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmkwp\" (UniqueName: \"kubernetes.io/projected/418d35e8-ad4e-4051-a0f1-fc9179300441-kube-api-access-cmkwp\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.414407 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wm8l\" (UniqueName: \"kubernetes.io/projected/a04121a4-3ab4-48c5-903e-8d0002771ff7-kube-api-access-5wm8l\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.416601 4758 generic.go:334] "Generic (PLEG): container finished" podID="2121970a-59f7-4943-ac1e-fa675e5eef8e" containerID="1b0811ae6d53b97ab814b4974446d3583f0018b01ea502fb69edd27ab1156add" exitCode=0 Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.416735 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-72be-account-create-update-gphrs" event={"ID":"2121970a-59f7-4943-ac1e-fa675e5eef8e","Type":"ContainerDied","Data":"1b0811ae6d53b97ab814b4974446d3583f0018b01ea502fb69edd27ab1156add"} Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.772974 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.941661 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f506e0d-da4f-4243-923b-f7e102fafd92-operator-scripts\") pod \"2f506e0d-da4f-4243-923b-f7e102fafd92\" (UID: \"2f506e0d-da4f-4243-923b-f7e102fafd92\") " Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.941824 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wlpx\" (UniqueName: \"kubernetes.io/projected/2f506e0d-da4f-4243-923b-f7e102fafd92-kube-api-access-8wlpx\") pod \"2f506e0d-da4f-4243-923b-f7e102fafd92\" (UID: \"2f506e0d-da4f-4243-923b-f7e102fafd92\") " Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.942245 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f506e0d-da4f-4243-923b-f7e102fafd92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f506e0d-da4f-4243-923b-f7e102fafd92" (UID: "2f506e0d-da4f-4243-923b-f7e102fafd92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:34 crc kubenswrapper[4758]: I0130 08:48:34.944858 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f506e0d-da4f-4243-923b-f7e102fafd92-kube-api-access-8wlpx" (OuterVolumeSpecName: "kube-api-access-8wlpx") pod "2f506e0d-da4f-4243-923b-f7e102fafd92" (UID: "2f506e0d-da4f-4243-923b-f7e102fafd92"). InnerVolumeSpecName "kube-api-access-8wlpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:35 crc kubenswrapper[4758]: I0130 08:48:35.044031 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f506e0d-da4f-4243-923b-f7e102fafd92-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:35 crc kubenswrapper[4758]: I0130 08:48:35.044092 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wlpx\" (UniqueName: \"kubernetes.io/projected/2f506e0d-da4f-4243-923b-f7e102fafd92-kube-api-access-8wlpx\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:35 crc kubenswrapper[4758]: I0130 08:48:35.427671 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2wfwb" event={"ID":"2f506e0d-da4f-4243-923b-f7e102fafd92","Type":"ContainerDied","Data":"61116589352b40224a96215bd65987be6542324de26a624da86a1ed3cc2be782"} Jan 30 08:48:35 crc kubenswrapper[4758]: I0130 08:48:35.427991 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61116589352b40224a96215bd65987be6542324de26a624da86a1ed3cc2be782" Jan 30 08:48:35 crc kubenswrapper[4758]: I0130 08:48:35.429144 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2wfwb" Jan 30 08:48:35 crc kubenswrapper[4758]: I0130 08:48:35.779686 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88b3ba93-ce04-4374-8fed-db25eb3b3065" path="/var/lib/kubelet/pods/88b3ba93-ce04-4374-8fed-db25eb3b3065/volumes" Jan 30 08:48:35 crc kubenswrapper[4758]: I0130 08:48:35.892338 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:35 crc kubenswrapper[4758]: I0130 08:48:35.897750 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.063569 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e9c3b23-6678-42de-923a-27ebb5fb61a3-operator-scripts\") pod \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\" (UID: \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\") " Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.063639 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-275pt\" (UniqueName: \"kubernetes.io/projected/1e9c3b23-6678-42de-923a-27ebb5fb61a3-kube-api-access-275pt\") pod \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\" (UID: \"1e9c3b23-6678-42de-923a-27ebb5fb61a3\") " Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.063669 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hllsb\" (UniqueName: \"kubernetes.io/projected/2121970a-59f7-4943-ac1e-fa675e5eef8e-kube-api-access-hllsb\") pod \"2121970a-59f7-4943-ac1e-fa675e5eef8e\" (UID: \"2121970a-59f7-4943-ac1e-fa675e5eef8e\") " Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.063807 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2121970a-59f7-4943-ac1e-fa675e5eef8e-operator-scripts\") pod \"2121970a-59f7-4943-ac1e-fa675e5eef8e\" (UID: \"2121970a-59f7-4943-ac1e-fa675e5eef8e\") " Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.064522 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2121970a-59f7-4943-ac1e-fa675e5eef8e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2121970a-59f7-4943-ac1e-fa675e5eef8e" (UID: "2121970a-59f7-4943-ac1e-fa675e5eef8e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.064523 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e9c3b23-6678-42de-923a-27ebb5fb61a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1e9c3b23-6678-42de-923a-27ebb5fb61a3" (UID: "1e9c3b23-6678-42de-923a-27ebb5fb61a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.068935 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e9c3b23-6678-42de-923a-27ebb5fb61a3-kube-api-access-275pt" (OuterVolumeSpecName: "kube-api-access-275pt") pod "1e9c3b23-6678-42de-923a-27ebb5fb61a3" (UID: "1e9c3b23-6678-42de-923a-27ebb5fb61a3"). InnerVolumeSpecName "kube-api-access-275pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.072648 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2121970a-59f7-4943-ac1e-fa675e5eef8e-kube-api-access-hllsb" (OuterVolumeSpecName: "kube-api-access-hllsb") pod "2121970a-59f7-4943-ac1e-fa675e5eef8e" (UID: "2121970a-59f7-4943-ac1e-fa675e5eef8e"). InnerVolumeSpecName "kube-api-access-hllsb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.165215 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-275pt\" (UniqueName: \"kubernetes.io/projected/1e9c3b23-6678-42de-923a-27ebb5fb61a3-kube-api-access-275pt\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.165251 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hllsb\" (UniqueName: \"kubernetes.io/projected/2121970a-59f7-4943-ac1e-fa675e5eef8e-kube-api-access-hllsb\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.165260 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2121970a-59f7-4943-ac1e-fa675e5eef8e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.165269 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e9c3b23-6678-42de-923a-27ebb5fb61a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.436984 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-72be-account-create-update-gphrs" event={"ID":"2121970a-59f7-4943-ac1e-fa675e5eef8e","Type":"ContainerDied","Data":"145e4e1ba4d4aa1093bcc2768b2faa2eacc47565152f77014a3c9ea2fe4efec5"} Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.437310 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="145e4e1ba4d4aa1093bcc2768b2faa2eacc47565152f77014a3c9ea2fe4efec5" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.437010 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-72be-account-create-update-gphrs" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.439817 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50ef-account-create-update-bskhv" event={"ID":"1e9c3b23-6678-42de-923a-27ebb5fb61a3","Type":"ContainerDied","Data":"c65be4ec5473b830e36189ebc74b7fa4306eae2315b26780268f290367ce6c90"} Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.439867 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c65be4ec5473b830e36189ebc74b7fa4306eae2315b26780268f290367ce6c90" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.439882 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-50ef-account-create-update-bskhv" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.490027 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-bg2b8" podUID="78294966-2fbd-4ed5-8d2a-2096ac07dac1" containerName="ovn-controller" probeResult="failure" output=< Jan 30 08:48:36 crc kubenswrapper[4758]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 08:48:36 crc kubenswrapper[4758]: > Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.590851 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.616548 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9jzfn" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855173 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bg2b8-config-s84jw"] Jan 30 08:48:36 crc kubenswrapper[4758]: E0130 08:48:36.855512 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418d35e8-ad4e-4051-a0f1-fc9179300441" containerName="mariadb-database-create" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855524 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="418d35e8-ad4e-4051-a0f1-fc9179300441" containerName="mariadb-database-create" Jan 30 08:48:36 crc kubenswrapper[4758]: E0130 08:48:36.855540 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04121a4-3ab4-48c5-903e-8d0002771ff7" containerName="mariadb-database-create" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855546 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04121a4-3ab4-48c5-903e-8d0002771ff7" containerName="mariadb-database-create" Jan 30 08:48:36 crc kubenswrapper[4758]: E0130 08:48:36.855568 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f506e0d-da4f-4243-923b-f7e102fafd92" containerName="mariadb-database-create" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855574 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f506e0d-da4f-4243-923b-f7e102fafd92" containerName="mariadb-database-create" Jan 30 08:48:36 crc kubenswrapper[4758]: E0130 08:48:36.855582 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e9c3b23-6678-42de-923a-27ebb5fb61a3" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855587 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e9c3b23-6678-42de-923a-27ebb5fb61a3" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: E0130 08:48:36.855600 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585a6b3e-695a-4ddc-91a6-9b39b241ffd0" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855606 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="585a6b3e-695a-4ddc-91a6-9b39b241ffd0" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: E0130 08:48:36.855618 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b3ba93-ce04-4374-8fed-db25eb3b3065" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855623 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b3ba93-ce04-4374-8fed-db25eb3b3065" 
containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: E0130 08:48:36.855637 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2121970a-59f7-4943-ac1e-fa675e5eef8e" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855642 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="2121970a-59f7-4943-ac1e-fa675e5eef8e" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855789 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="2121970a-59f7-4943-ac1e-fa675e5eef8e" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855800 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a04121a4-3ab4-48c5-903e-8d0002771ff7" containerName="mariadb-database-create" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855814 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="585a6b3e-695a-4ddc-91a6-9b39b241ffd0" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855821 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f506e0d-da4f-4243-923b-f7e102fafd92" containerName="mariadb-database-create" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855829 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e9c3b23-6678-42de-923a-27ebb5fb61a3" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855838 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="418d35e8-ad4e-4051-a0f1-fc9179300441" containerName="mariadb-database-create" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.855849 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="88b3ba93-ce04-4374-8fed-db25eb3b3065" containerName="mariadb-account-create-update" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.856357 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.862138 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bg2b8-config-s84jw"] Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.862441 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.980535 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.980601 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run-ovn\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.980637 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-log-ovn\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.980660 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-scripts\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.980709 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-additional-scripts\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:36 crc kubenswrapper[4758]: I0130 08:48:36.980733 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxjq8\" (UniqueName: \"kubernetes.io/projected/1d179b42-bac4-40cb-afea-af27020a2b51-kube-api-access-xxjq8\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.082346 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.082403 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run-ovn\") pod 
\"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.082441 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-log-ovn\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.082463 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-scripts\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.082503 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-additional-scripts\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.082519 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxjq8\" (UniqueName: \"kubernetes.io/projected/1d179b42-bac4-40cb-afea-af27020a2b51-kube-api-access-xxjq8\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.083009 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run-ovn\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.083011 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-log-ovn\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.083011 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.083559 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-additional-scripts\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.085017 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-scripts\") pod 
\"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.100856 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxjq8\" (UniqueName: \"kubernetes.io/projected/1d179b42-bac4-40cb-afea-af27020a2b51-kube-api-access-xxjq8\") pod \"ovn-controller-bg2b8-config-s84jw\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.190845 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:37 crc kubenswrapper[4758]: I0130 08:48:37.871427 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bg2b8-config-s84jw"] Jan 30 08:48:38 crc kubenswrapper[4758]: I0130 08:48:38.507418 4758 generic.go:334] "Generic (PLEG): container finished" podID="1d179b42-bac4-40cb-afea-af27020a2b51" containerID="44986833bc4a6d26e1f0961e4b7a2ef317d50d30ae747ee1fe73659d1eb85ff3" exitCode=0 Jan 30 08:48:38 crc kubenswrapper[4758]: I0130 08:48:38.507464 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bg2b8-config-s84jw" event={"ID":"1d179b42-bac4-40cb-afea-af27020a2b51","Type":"ContainerDied","Data":"44986833bc4a6d26e1f0961e4b7a2ef317d50d30ae747ee1fe73659d1eb85ff3"} Jan 30 08:48:38 crc kubenswrapper[4758]: I0130 08:48:38.507488 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bg2b8-config-s84jw" event={"ID":"1d179b42-bac4-40cb-afea-af27020a2b51","Type":"ContainerStarted","Data":"64b3554d747e64fa29aa5f7830b4267e45eb5c0b76b56d2c4def2902f3ea79e9"} Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.197303 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5vdm5"] Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.198424 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.200093 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.211462 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5vdm5"] Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.247511 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96jjq\" (UniqueName: \"kubernetes.io/projected/78535c40-59db-4a0e-bcb9-4bae7e92548c-kube-api-access-96jjq\") pod \"root-account-create-update-5vdm5\" (UID: \"78535c40-59db-4a0e-bcb9-4bae7e92548c\") " pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.247569 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78535c40-59db-4a0e-bcb9-4bae7e92548c-operator-scripts\") pod \"root-account-create-update-5vdm5\" (UID: \"78535c40-59db-4a0e-bcb9-4bae7e92548c\") " pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.349517 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96jjq\" (UniqueName: \"kubernetes.io/projected/78535c40-59db-4a0e-bcb9-4bae7e92548c-kube-api-access-96jjq\") pod \"root-account-create-update-5vdm5\" (UID: \"78535c40-59db-4a0e-bcb9-4bae7e92548c\") " pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.349580 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78535c40-59db-4a0e-bcb9-4bae7e92548c-operator-scripts\") pod \"root-account-create-update-5vdm5\" (UID: \"78535c40-59db-4a0e-bcb9-4bae7e92548c\") " pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.350433 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78535c40-59db-4a0e-bcb9-4bae7e92548c-operator-scripts\") pod \"root-account-create-update-5vdm5\" (UID: \"78535c40-59db-4a0e-bcb9-4bae7e92548c\") " pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.374853 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96jjq\" (UniqueName: \"kubernetes.io/projected/78535c40-59db-4a0e-bcb9-4bae7e92548c-kube-api-access-96jjq\") pod \"root-account-create-update-5vdm5\" (UID: \"78535c40-59db-4a0e-bcb9-4bae7e92548c\") " pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.525271 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:39 crc kubenswrapper[4758]: I0130 08:48:39.936467 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.060669 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run\") pod \"1d179b42-bac4-40cb-afea-af27020a2b51\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.060783 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxjq8\" (UniqueName: \"kubernetes.io/projected/1d179b42-bac4-40cb-afea-af27020a2b51-kube-api-access-xxjq8\") pod \"1d179b42-bac4-40cb-afea-af27020a2b51\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.060828 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-log-ovn\") pod \"1d179b42-bac4-40cb-afea-af27020a2b51\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.060883 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run-ovn\") pod \"1d179b42-bac4-40cb-afea-af27020a2b51\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.060926 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-additional-scripts\") pod \"1d179b42-bac4-40cb-afea-af27020a2b51\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.060959 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-scripts\") pod \"1d179b42-bac4-40cb-afea-af27020a2b51\" (UID: \"1d179b42-bac4-40cb-afea-af27020a2b51\") " Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.062324 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "1d179b42-bac4-40cb-afea-af27020a2b51" (UID: "1d179b42-bac4-40cb-afea-af27020a2b51"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.062350 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run" (OuterVolumeSpecName: "var-run") pod "1d179b42-bac4-40cb-afea-af27020a2b51" (UID: "1d179b42-bac4-40cb-afea-af27020a2b51"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.062589 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "1d179b42-bac4-40cb-afea-af27020a2b51" (UID: "1d179b42-bac4-40cb-afea-af27020a2b51"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.062917 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-scripts" (OuterVolumeSpecName: "scripts") pod "1d179b42-bac4-40cb-afea-af27020a2b51" (UID: "1d179b42-bac4-40cb-afea-af27020a2b51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.062972 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "1d179b42-bac4-40cb-afea-af27020a2b51" (UID: "1d179b42-bac4-40cb-afea-af27020a2b51"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.071420 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d179b42-bac4-40cb-afea-af27020a2b51-kube-api-access-xxjq8" (OuterVolumeSpecName: "kube-api-access-xxjq8") pod "1d179b42-bac4-40cb-afea-af27020a2b51" (UID: "1d179b42-bac4-40cb-afea-af27020a2b51"). InnerVolumeSpecName "kube-api-access-xxjq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.087886 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5vdm5"] Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.162918 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxjq8\" (UniqueName: \"kubernetes.io/projected/1d179b42-bac4-40cb-afea-af27020a2b51-kube-api-access-xxjq8\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.162945 4758 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.162955 4758 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.162976 4758 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.162985 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d179b42-bac4-40cb-afea-af27020a2b51-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.162992 4758 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1d179b42-bac4-40cb-afea-af27020a2b51-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.523330 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bg2b8-config-s84jw" event={"ID":"1d179b42-bac4-40cb-afea-af27020a2b51","Type":"ContainerDied","Data":"64b3554d747e64fa29aa5f7830b4267e45eb5c0b76b56d2c4def2902f3ea79e9"} Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.523716 4758 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64b3554d747e64fa29aa5f7830b4267e45eb5c0b76b56d2c4def2902f3ea79e9" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.523363 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bg2b8-config-s84jw" Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.525865 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5vdm5" event={"ID":"78535c40-59db-4a0e-bcb9-4bae7e92548c","Type":"ContainerDied","Data":"defa08629f2c819116b155489e34ad3fab9a107ee7bcfc7941be06048d203e56"} Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.525715 4758 generic.go:334] "Generic (PLEG): container finished" podID="78535c40-59db-4a0e-bcb9-4bae7e92548c" containerID="defa08629f2c819116b155489e34ad3fab9a107ee7bcfc7941be06048d203e56" exitCode=0 Jan 30 08:48:40 crc kubenswrapper[4758]: I0130 08:48:40.526424 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5vdm5" event={"ID":"78535c40-59db-4a0e-bcb9-4bae7e92548c","Type":"ContainerStarted","Data":"4be144c8044df6977860b5c54c90d9e992a7b9f80f15e2e44a617553dddd6d06"} Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.047394 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-bg2b8-config-s84jw"] Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.066711 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-bg2b8-config-s84jw"] Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.118169 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bg2b8-config-v28g5"] Jan 30 08:48:41 crc kubenswrapper[4758]: E0130 08:48:41.120789 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d179b42-bac4-40cb-afea-af27020a2b51" containerName="ovn-config" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.120954 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d179b42-bac4-40cb-afea-af27020a2b51" containerName="ovn-config" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.121253 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d179b42-bac4-40cb-afea-af27020a2b51" containerName="ovn-config" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.123006 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.127637 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.130490 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bg2b8-config-v28g5"] Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.195796 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-additional-scripts\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.196652 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-scripts\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.196775 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run-ovn\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.196947 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-log-ovn\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.197237 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.199387 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k58d9\" (UniqueName: \"kubernetes.io/projected/7558428a-6aa5-454c-b401-d921918c8239-kube-api-access-k58d9\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.301358 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run-ovn\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.301446 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-log-ovn\") 
pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.301512 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.301572 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k58d9\" (UniqueName: \"kubernetes.io/projected/7558428a-6aa5-454c-b401-d921918c8239-kube-api-access-k58d9\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.301637 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-additional-scripts\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.301662 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-scripts\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.302230 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run-ovn\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.302282 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-log-ovn\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.302369 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.302626 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-additional-scripts\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.303796 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-scripts\") pod \"ovn-controller-bg2b8-config-v28g5\" 
(UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.321910 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k58d9\" (UniqueName: \"kubernetes.io/projected/7558428a-6aa5-454c-b401-d921918c8239-kube-api-access-k58d9\") pod \"ovn-controller-bg2b8-config-v28g5\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.382887 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-kc8cg"] Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.384012 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.386806 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.386840 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wpw9v" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.393079 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kc8cg"] Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.403303 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-db-sync-config-data\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.403402 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-config-data\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.403468 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7nc4\" (UniqueName: \"kubernetes.io/projected/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-kube-api-access-g7nc4\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.403554 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-combined-ca-bundle\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.488476 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.506461 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-db-sync-config-data\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.506563 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-config-data\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.506617 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7nc4\" (UniqueName: \"kubernetes.io/projected/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-kube-api-access-g7nc4\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.506681 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-combined-ca-bundle\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.512055 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-combined-ca-bundle\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.512994 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-config-data\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.533142 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-db-sync-config-data\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.553599 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7nc4\" (UniqueName: \"kubernetes.io/projected/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-kube-api-access-g7nc4\") pod \"glance-db-sync-kc8cg\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.704948 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kc8cg" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.875592 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d179b42-bac4-40cb-afea-af27020a2b51" path="/var/lib/kubelet/pods/1d179b42-bac4-40cb-afea-af27020a2b51/volumes" Jan 30 08:48:41 crc kubenswrapper[4758]: I0130 08:48:41.876503 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-bg2b8" Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.193843 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.231991 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96jjq\" (UniqueName: \"kubernetes.io/projected/78535c40-59db-4a0e-bcb9-4bae7e92548c-kube-api-access-96jjq\") pod \"78535c40-59db-4a0e-bcb9-4bae7e92548c\" (UID: \"78535c40-59db-4a0e-bcb9-4bae7e92548c\") " Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.232066 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78535c40-59db-4a0e-bcb9-4bae7e92548c-operator-scripts\") pod \"78535c40-59db-4a0e-bcb9-4bae7e92548c\" (UID: \"78535c40-59db-4a0e-bcb9-4bae7e92548c\") " Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.234435 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78535c40-59db-4a0e-bcb9-4bae7e92548c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78535c40-59db-4a0e-bcb9-4bae7e92548c" (UID: "78535c40-59db-4a0e-bcb9-4bae7e92548c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.238121 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78535c40-59db-4a0e-bcb9-4bae7e92548c-kube-api-access-96jjq" (OuterVolumeSpecName: "kube-api-access-96jjq") pod "78535c40-59db-4a0e-bcb9-4bae7e92548c" (UID: "78535c40-59db-4a0e-bcb9-4bae7e92548c"). InnerVolumeSpecName "kube-api-access-96jjq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.333880 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96jjq\" (UniqueName: \"kubernetes.io/projected/78535c40-59db-4a0e-bcb9-4bae7e92548c-kube-api-access-96jjq\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.333923 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78535c40-59db-4a0e-bcb9-4bae7e92548c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.340149 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bg2b8-config-v28g5"] Jan 30 08:48:42 crc kubenswrapper[4758]: W0130 08:48:42.343119 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7558428a_6aa5_454c_b401_d921918c8239.slice/crio-9c0f84ac2c8bdae619adbfe269cb5369aa97309d8e6c0d8dd51820f02b3f24ba WatchSource:0}: Error finding container 9c0f84ac2c8bdae619adbfe269cb5369aa97309d8e6c0d8dd51820f02b3f24ba: Status 404 returned error can't find the container with id 9c0f84ac2c8bdae619adbfe269cb5369aa97309d8e6c0d8dd51820f02b3f24ba Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.565379 4758 generic.go:334] "Generic (PLEG): container finished" podID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" containerID="858a4cc294f2673581b5056b6b3f2795b013fb5990368406beb8a506660b666f" exitCode=0 Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.565653 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5","Type":"ContainerDied","Data":"858a4cc294f2673581b5056b6b3f2795b013fb5990368406beb8a506660b666f"} Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.573772 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5vdm5" event={"ID":"78535c40-59db-4a0e-bcb9-4bae7e92548c","Type":"ContainerDied","Data":"4be144c8044df6977860b5c54c90d9e992a7b9f80f15e2e44a617553dddd6d06"} Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.573815 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4be144c8044df6977860b5c54c90d9e992a7b9f80f15e2e44a617553dddd6d06" Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.573872 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5vdm5" Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.576002 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kc8cg"] Jan 30 08:48:42 crc kubenswrapper[4758]: I0130 08:48:42.579744 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bg2b8-config-v28g5" event={"ID":"7558428a-6aa5-454c-b401-d921918c8239","Type":"ContainerStarted","Data":"9c0f84ac2c8bdae619adbfe269cb5369aa97309d8e6c0d8dd51820f02b3f24ba"} Jan 30 08:48:42 crc kubenswrapper[4758]: W0130 08:48:42.586335 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2e127b5_11e2_40a6_8389_a9d08b8cae4f.slice/crio-32a7ffa868b27537045370b7000a548e24a7b459460ee6419c99353ed8b5d949 WatchSource:0}: Error finding container 32a7ffa868b27537045370b7000a548e24a7b459460ee6419c99353ed8b5d949: Status 404 returned error can't find the container with id 32a7ffa868b27537045370b7000a548e24a7b459460ee6419c99353ed8b5d949 Jan 30 08:48:43 crc kubenswrapper[4758]: I0130 08:48:43.591348 4758 generic.go:334] "Generic (PLEG): container finished" podID="7558428a-6aa5-454c-b401-d921918c8239" containerID="86c33d040e9741a4c414c369434d352975d29a3e661c9ee51459ded8965dad1b" exitCode=0 Jan 30 08:48:43 crc kubenswrapper[4758]: I0130 08:48:43.591399 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bg2b8-config-v28g5" event={"ID":"7558428a-6aa5-454c-b401-d921918c8239","Type":"ContainerDied","Data":"86c33d040e9741a4c414c369434d352975d29a3e661c9ee51459ded8965dad1b"} Jan 30 08:48:43 crc kubenswrapper[4758]: I0130 08:48:43.595699 4758 generic.go:334] "Generic (PLEG): container finished" podID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" containerID="80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293" exitCode=0 Jan 30 08:48:43 crc kubenswrapper[4758]: I0130 08:48:43.595773 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89ff2fc5-609f-4ca7-b997-9f8adfa5a221","Type":"ContainerDied","Data":"80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293"} Jan 30 08:48:43 crc kubenswrapper[4758]: I0130 08:48:43.605416 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kc8cg" event={"ID":"d2e127b5-11e2-40a6-8389-a9d08b8cae4f","Type":"ContainerStarted","Data":"32a7ffa868b27537045370b7000a548e24a7b459460ee6419c99353ed8b5d949"} Jan 30 08:48:43 crc kubenswrapper[4758]: I0130 08:48:43.612682 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5","Type":"ContainerStarted","Data":"cb775ec2ee99ffed411c83db7c8c8f39801fd5654096db2c76a7443041b48ca9"} Jan 30 08:48:43 crc kubenswrapper[4758]: I0130 08:48:43.614802 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:48:43 crc kubenswrapper[4758]: I0130 08:48:43.692124 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.454834214 podStartE2EDuration="1m17.692097769s" podCreationTimestamp="2026-01-30 08:47:26 +0000 UTC" firstStartedPulling="2026-01-30 08:47:29.110229854 +0000 UTC m=+1054.082541405" lastFinishedPulling="2026-01-30 08:48:09.347493409 +0000 UTC m=+1094.319804960" observedRunningTime="2026-01-30 08:48:43.676901126 +0000 UTC m=+1128.649212687" 
watchObservedRunningTime="2026-01-30 08:48:43.692097769 +0000 UTC m=+1128.664409320" Jan 30 08:48:44 crc kubenswrapper[4758]: I0130 08:48:44.626323 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89ff2fc5-609f-4ca7-b997-9f8adfa5a221","Type":"ContainerStarted","Data":"dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba"} Jan 30 08:48:44 crc kubenswrapper[4758]: I0130 08:48:44.627339 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 08:48:44 crc kubenswrapper[4758]: I0130 08:48:44.647432 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371958.207357 podStartE2EDuration="1m18.647417638s" podCreationTimestamp="2026-01-30 08:47:26 +0000 UTC" firstStartedPulling="2026-01-30 08:47:28.856548099 +0000 UTC m=+1053.828859660" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:48:44.645912421 +0000 UTC m=+1129.618223982" watchObservedRunningTime="2026-01-30 08:48:44.647417638 +0000 UTC m=+1129.619729189" Jan 30 08:48:44 crc kubenswrapper[4758]: I0130 08:48:44.974253 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bg2b8-config-v28g5" Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.095349 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-additional-scripts\") pod \"7558428a-6aa5-454c-b401-d921918c8239\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.095446 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run-ovn\") pod \"7558428a-6aa5-454c-b401-d921918c8239\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.095471 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-log-ovn\") pod \"7558428a-6aa5-454c-b401-d921918c8239\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.095546 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k58d9\" (UniqueName: \"kubernetes.io/projected/7558428a-6aa5-454c-b401-d921918c8239-kube-api-access-k58d9\") pod \"7558428a-6aa5-454c-b401-d921918c8239\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.095624 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-scripts\") pod \"7558428a-6aa5-454c-b401-d921918c8239\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.095658 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run\") pod \"7558428a-6aa5-454c-b401-d921918c8239\" (UID: \"7558428a-6aa5-454c-b401-d921918c8239\") " Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.096116 4758 operation_generator.go:803] 
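The two "Observed pod startup duration" entries above are worth decoding. The numbers are consistent with podStartSLOduration being the end-to-end startup time (watchObservedRunningTime minus podCreationTimestamp) minus the image-pull window (lastFinishedPulling minus firstStartedPulling); that reading is inferred from the values, not from the kubelet source. For rabbitmq-cell1-server-0 it gives 77.692s - 40.237s = 37.455s, exactly as logged. For rabbitmq-server-0, lastFinishedPulling is the zero time, so the pull window underflows, time.Time.Sub clamps it to the minimum time.Duration, and the subsequent Duration subtraction wraps around int64, which is where the nonsensical -9223371958.207357 comes from. A sketch reproducing both values from the logged timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(v string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-01-30 08:47:26 +0000 UTC")

        // rabbitmq-cell1-server-0: a well-formed record.
        first := mustParse("2026-01-30 08:47:29.110229854 +0000 UTC")
        last := mustParse("2026-01-30 08:48:09.347493409 +0000 UTC")
        observed := mustParse("2026-01-30 08:48:43.692097769 +0000 UTC")
        e2e := observed.Sub(created) // 1m17.692097769s
        pull := last.Sub(first)      // 40.237263555s
        fmt.Println((e2e - pull).Seconds()) // 37.454834214

        // rabbitmq-server-0: lastFinishedPulling is the zero time.
        var zero time.Time
        first = mustParse("2026-01-30 08:47:28.856548099 +0000 UTC")
        observed = mustParse("2026-01-30 08:48:44.647417638 +0000 UTC")
        e2e = observed.Sub(created)
        pull = zero.Sub(first)      // Sub clamps the underflow to the minimum Duration
        fmt.Println(pull == -1<<63) // true: -2^63 nanoseconds
        // Duration arithmetic is plain int64 math, so e2e - pull wraps around:
        fmt.Println((e2e - pull).Seconds()) // about -9.223371958207358e+09
    }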
Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.096821 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7558428a-6aa5-454c-b401-d921918c8239" (UID: "7558428a-6aa5-454c-b401-d921918c8239"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.096861 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7558428a-6aa5-454c-b401-d921918c8239" (UID: "7558428a-6aa5-454c-b401-d921918c8239"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.096881 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7558428a-6aa5-454c-b401-d921918c8239" (UID: "7558428a-6aa5-454c-b401-d921918c8239"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.098467 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-scripts" (OuterVolumeSpecName: "scripts") pod "7558428a-6aa5-454c-b401-d921918c8239" (UID: "7558428a-6aa5-454c-b401-d921918c8239"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.102138 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7558428a-6aa5-454c-b401-d921918c8239-kube-api-access-k58d9" (OuterVolumeSpecName: "kube-api-access-k58d9") pod "7558428a-6aa5-454c-b401-d921918c8239" (UID: "7558428a-6aa5-454c-b401-d921918c8239"). InnerVolumeSpecName "kube-api-access-k58d9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.198206 4758 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.198247 4758 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.198262 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k58d9\" (UniqueName: \"kubernetes.io/projected/7558428a-6aa5-454c-b401-d921918c8239-kube-api-access-k58d9\") on node \"crc\" DevicePath \"\""
Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.198274 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.198285 4758 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7558428a-6aa5-454c-b401-d921918c8239-var-run\") on node \"crc\" DevicePath \"\""
Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.198297 4758 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7558428a-6aa5-454c-b401-d921918c8239-additional-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.633684 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bg2b8-config-v28g5" event={"ID":"7558428a-6aa5-454c-b401-d921918c8239","Type":"ContainerDied","Data":"9c0f84ac2c8bdae619adbfe269cb5369aa97309d8e6c0d8dd51820f02b3f24ba"}
Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.633750 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c0f84ac2c8bdae619adbfe269cb5369aa97309d8e6c0d8dd51820f02b3f24ba"
Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.633912 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bg2b8-config-v28g5"
Jan 30 08:48:45 crc kubenswrapper[4758]: I0130 08:48:45.913755 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0"
Jan 30 08:48:45 crc kubenswrapper[4758]: E0130 08:48:45.914587 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 08:48:45 crc kubenswrapper[4758]: E0130 08:48:45.914608 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 30 08:48:45 crc kubenswrapper[4758]: E0130 08:48:45.914655 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:49:17.914637901 +0000 UTC m=+1162.886949452 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found
Jan 30 08:48:46 crc kubenswrapper[4758]: I0130 08:48:46.092444 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-bg2b8-config-v28g5"]
Jan 30 08:48:46 crc kubenswrapper[4758]: I0130 08:48:46.102212 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-bg2b8-config-v28g5"]
Jan 30 08:48:47 crc kubenswrapper[4758]: I0130 08:48:47.781911 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7558428a-6aa5-454c-b401-d921918c8239" path="/var/lib/kubelet/pods/7558428a-6aa5-454c-b401-d921918c8239/volumes"
Jan 30 08:48:52 crc kubenswrapper[4758]: I0130 08:48:52.387904 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:48:52 crc kubenswrapper[4758]: I0130 08:48:52.388251 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:48:52 crc kubenswrapper[4758]: I0130 08:48:52.388293 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx"
Jan 30 08:48:52 crc kubenswrapper[4758]: I0130 08:48:52.388931 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30f01f1e437d64a8924bada24b4f475f6ae60692a25492cb418ecb9bb3c281c2"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 08:48:52 crc kubenswrapper[4758]: I0130 08:48:52.388974 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://30f01f1e437d64a8924bada24b4f475f6ae60692a25492cb418ecb9bb3c281c2" gracePeriod=600
Jan 30 08:48:52 crc kubenswrapper[4758]: I0130 08:48:52.700810 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="30f01f1e437d64a8924bada24b4f475f6ae60692a25492cb418ecb9bb3c281c2" exitCode=0
Jan 30 08:48:52 crc kubenswrapper[4758]: I0130 08:48:52.700868 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"30f01f1e437d64a8924bada24b4f475f6ae60692a25492cb418ecb9bb3c281c2"}
Jan 30 08:48:52 crc kubenswrapper[4758]: I0130 08:48:52.700911 4758 scope.go:117] "RemoveContainer" containerID="69b903b761c0949dbe1210bdef2b095568ae802b78c69042a2686b209bdd29e6"
Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.109310 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
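The machine-config-daemon sequence above is a complete liveness-failure lifecycle: the prober's GET to http://127.0.0.1:8798/health is refused, the probe is recorded as failed, the SyncLoop marks the pod unhealthy, and the kubelet kills the container (gracePeriod=600) so it can be restarted. A minimal sketch of the HTTP check itself, assuming the usual probe semantics (per-attempt timeout, 2xx/3xx counts as success); the one-second timeout is an assumption:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probeOnce performs a single HTTP liveness check: any transport error
    // (such as the "connect: connection refused" above) or a status outside
    // 200-399 counts as a failure.
    func probeOnce(url string) error {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unexpected status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
            fmt.Println("Probe failed:", err)
        }
    }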
pod="openstack/rabbitmq-server-0" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.285279 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.772767 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"b01f58947342357281d9de34ab8ce6d30a071f097fc6b75d76a38795a72373c2"} Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.902127 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-cgr7c"] Jan 30 08:48:58 crc kubenswrapper[4758]: E0130 08:48:58.902514 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78535c40-59db-4a0e-bcb9-4bae7e92548c" containerName="mariadb-account-create-update" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.902534 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="78535c40-59db-4a0e-bcb9-4bae7e92548c" containerName="mariadb-account-create-update" Jan 30 08:48:58 crc kubenswrapper[4758]: E0130 08:48:58.902560 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7558428a-6aa5-454c-b401-d921918c8239" containerName="ovn-config" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.902570 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7558428a-6aa5-454c-b401-d921918c8239" containerName="ovn-config" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.902764 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="78535c40-59db-4a0e-bcb9-4bae7e92548c" containerName="mariadb-account-create-update" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.902800 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7558428a-6aa5-454c-b401-d921918c8239" containerName="ovn-config" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.903338 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-cgr7c" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.974750 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b87c6fc-6491-419a-96bb-9542edb1e8aa-operator-scripts\") pod \"cinder-db-create-cgr7c\" (UID: \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\") " pod="openstack/cinder-db-create-cgr7c" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.974810 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnzdm\" (UniqueName: \"kubernetes.io/projected/6b87c6fc-6491-419a-96bb-9542edb1e8aa-kube-api-access-bnzdm\") pod \"cinder-db-create-cgr7c\" (UID: \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\") " pod="openstack/cinder-db-create-cgr7c" Jan 30 08:48:58 crc kubenswrapper[4758]: I0130 08:48:58.982667 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cgr7c"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.076856 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b87c6fc-6491-419a-96bb-9542edb1e8aa-operator-scripts\") pod \"cinder-db-create-cgr7c\" (UID: \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\") " pod="openstack/cinder-db-create-cgr7c" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.076922 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnzdm\" (UniqueName: \"kubernetes.io/projected/6b87c6fc-6491-419a-96bb-9542edb1e8aa-kube-api-access-bnzdm\") pod \"cinder-db-create-cgr7c\" (UID: \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\") " pod="openstack/cinder-db-create-cgr7c" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.078261 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b87c6fc-6491-419a-96bb-9542edb1e8aa-operator-scripts\") pod \"cinder-db-create-cgr7c\" (UID: \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\") " pod="openstack/cinder-db-create-cgr7c" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.106395 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-feed-account-create-update-qcmp4"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.107554 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.110765 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.131013 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnzdm\" (UniqueName: \"kubernetes.io/projected/6b87c6fc-6491-419a-96bb-9542edb1e8aa-kube-api-access-bnzdm\") pod \"cinder-db-create-cgr7c\" (UID: \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\") " pod="openstack/cinder-db-create-cgr7c" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.133015 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-feed-account-create-update-qcmp4"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.178210 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58afb54a-7b57-42bc-af7c-13db0bfd1580-operator-scripts\") pod \"cinder-feed-account-create-update-qcmp4\" (UID: \"58afb54a-7b57-42bc-af7c-13db0bfd1580\") " pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.178301 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqh72\" (UniqueName: \"kubernetes.io/projected/58afb54a-7b57-42bc-af7c-13db0bfd1580-kube-api-access-kqh72\") pod \"cinder-feed-account-create-update-qcmp4\" (UID: \"58afb54a-7b57-42bc-af7c-13db0bfd1580\") " pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.224240 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-cgr7c" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.279468 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58afb54a-7b57-42bc-af7c-13db0bfd1580-operator-scripts\") pod \"cinder-feed-account-create-update-qcmp4\" (UID: \"58afb54a-7b57-42bc-af7c-13db0bfd1580\") " pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.279545 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqh72\" (UniqueName: \"kubernetes.io/projected/58afb54a-7b57-42bc-af7c-13db0bfd1580-kube-api-access-kqh72\") pod \"cinder-feed-account-create-update-qcmp4\" (UID: \"58afb54a-7b57-42bc-af7c-13db0bfd1580\") " pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.280406 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58afb54a-7b57-42bc-af7c-13db0bfd1580-operator-scripts\") pod \"cinder-feed-account-create-update-qcmp4\" (UID: \"58afb54a-7b57-42bc-af7c-13db0bfd1580\") " pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.313729 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqh72\" (UniqueName: \"kubernetes.io/projected/58afb54a-7b57-42bc-af7c-13db0bfd1580-kube-api-access-kqh72\") pod \"cinder-feed-account-create-update-qcmp4\" (UID: \"58afb54a-7b57-42bc-af7c-13db0bfd1580\") " pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.386013 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-bstt2"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.387223 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bstt2" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.422324 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-495a-account-create-update-k2qtr"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.423425 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.427896 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bstt2"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.427990 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.448938 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-495a-account-create-update-k2qtr"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.484000 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz2tl\" (UniqueName: \"kubernetes.io/projected/8a01921f-9f43-471a-bbd3-0a7e9bab364e-kube-api-access-zz2tl\") pod \"barbican-db-create-bstt2\" (UID: \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\") " pod="openstack/barbican-db-create-bstt2" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.484195 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lzfr\" (UniqueName: \"kubernetes.io/projected/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-kube-api-access-2lzfr\") pod \"barbican-495a-account-create-update-k2qtr\" (UID: \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\") " pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.484280 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a01921f-9f43-471a-bbd3-0a7e9bab364e-operator-scripts\") pod \"barbican-db-create-bstt2\" (UID: \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\") " pod="openstack/barbican-db-create-bstt2" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.484321 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-operator-scripts\") pod \"barbican-495a-account-create-update-k2qtr\" (UID: \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\") " pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.493441 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.589878 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz2tl\" (UniqueName: \"kubernetes.io/projected/8a01921f-9f43-471a-bbd3-0a7e9bab364e-kube-api-access-zz2tl\") pod \"barbican-db-create-bstt2\" (UID: \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\") " pod="openstack/barbican-db-create-bstt2" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.590200 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lzfr\" (UniqueName: \"kubernetes.io/projected/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-kube-api-access-2lzfr\") pod \"barbican-495a-account-create-update-k2qtr\" (UID: \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\") " pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.590293 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a01921f-9f43-471a-bbd3-0a7e9bab364e-operator-scripts\") pod \"barbican-db-create-bstt2\" (UID: \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\") " pod="openstack/barbican-db-create-bstt2" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.590367 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-operator-scripts\") pod \"barbican-495a-account-create-update-k2qtr\" (UID: \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\") " pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.591286 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-operator-scripts\") pod \"barbican-495a-account-create-update-k2qtr\" (UID: \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\") " pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.591957 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a01921f-9f43-471a-bbd3-0a7e9bab364e-operator-scripts\") pod \"barbican-db-create-bstt2\" (UID: \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\") " pod="openstack/barbican-db-create-bstt2" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.604356 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-2rzkq"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.605938 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-2rzkq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.636146 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2rzkq"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.679600 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lzfr\" (UniqueName: \"kubernetes.io/projected/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-kube-api-access-2lzfr\") pod \"barbican-495a-account-create-update-k2qtr\" (UID: \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\") " pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.682484 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz2tl\" (UniqueName: \"kubernetes.io/projected/8a01921f-9f43-471a-bbd3-0a7e9bab364e-kube-api-access-zz2tl\") pod \"barbican-db-create-bstt2\" (UID: \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\") " pod="openstack/barbican-db-create-bstt2" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.692928 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g58pq\" (UniqueName: \"kubernetes.io/projected/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-kube-api-access-g58pq\") pod \"neutron-db-create-2rzkq\" (UID: \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\") " pod="openstack/neutron-db-create-2rzkq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.693216 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-operator-scripts\") pod \"neutron-db-create-2rzkq\" (UID: \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\") " pod="openstack/neutron-db-create-2rzkq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.704682 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bstt2" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.749161 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-6cqnq"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.750707 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.763059 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.763280 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w5f5m" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.766011 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.766216 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.767362 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.799170 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmnbx\" (UniqueName: \"kubernetes.io/projected/5665f568-1d3e-48dc-8a32-7bd9ad02a037-kube-api-access-gmnbx\") pod \"keystone-db-sync-6cqnq\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.799929 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g58pq\" (UniqueName: \"kubernetes.io/projected/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-kube-api-access-g58pq\") pod \"neutron-db-create-2rzkq\" (UID: \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\") " pod="openstack/neutron-db-create-2rzkq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.799980 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-combined-ca-bundle\") pod \"keystone-db-sync-6cqnq\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.800021 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-config-data\") pod \"keystone-db-sync-6cqnq\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.800095 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-operator-scripts\") pod \"neutron-db-create-2rzkq\" (UID: \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\") " pod="openstack/neutron-db-create-2rzkq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.800930 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-operator-scripts\") pod \"neutron-db-create-2rzkq\" (UID: \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\") " pod="openstack/neutron-db-create-2rzkq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.808284 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6cqnq"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.839267 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kc8cg" event={"ID":"d2e127b5-11e2-40a6-8389-a9d08b8cae4f","Type":"ContainerStarted","Data":"2906f8c6c80f27b1b5d6a346bfa1c2bccd0d8111cf2311c1ea9793ee001e6172"} Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.850109 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g58pq\" (UniqueName: \"kubernetes.io/projected/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-kube-api-access-g58pq\") pod \"neutron-db-create-2rzkq\" (UID: \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\") " pod="openstack/neutron-db-create-2rzkq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.862535 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5131-account-create-update-tbcx5"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.863968 4758 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.866482 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.876675 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5131-account-create-update-tbcx5"] Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.881945 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-kc8cg" podStartSLOduration=3.454730715 podStartE2EDuration="18.881924712s" podCreationTimestamp="2026-01-30 08:48:41 +0000 UTC" firstStartedPulling="2026-01-30 08:48:42.58855767 +0000 UTC m=+1127.560869221" lastFinishedPulling="2026-01-30 08:48:58.015751667 +0000 UTC m=+1142.988063218" observedRunningTime="2026-01-30 08:48:59.864695026 +0000 UTC m=+1144.837006577" watchObservedRunningTime="2026-01-30 08:48:59.881924712 +0000 UTC m=+1144.854236263" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.905564 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-combined-ca-bundle\") pod \"keystone-db-sync-6cqnq\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.905676 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-config-data\") pod \"keystone-db-sync-6cqnq\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.905714 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c85c22cc-99a7-4939-904c-bffa8f2d5457-operator-scripts\") pod \"neutron-5131-account-create-update-tbcx5\" (UID: \"c85c22cc-99a7-4939-904c-bffa8f2d5457\") " pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.905778 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pdmh\" (UniqueName: \"kubernetes.io/projected/c85c22cc-99a7-4939-904c-bffa8f2d5457-kube-api-access-2pdmh\") pod \"neutron-5131-account-create-update-tbcx5\" (UID: \"c85c22cc-99a7-4939-904c-bffa8f2d5457\") " pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.905916 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmnbx\" (UniqueName: \"kubernetes.io/projected/5665f568-1d3e-48dc-8a32-7bd9ad02a037-kube-api-access-gmnbx\") pod \"keystone-db-sync-6cqnq\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.930751 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-combined-ca-bundle\") pod \"keystone-db-sync-6cqnq\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.931559 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-config-data\") pod \"keystone-db-sync-6cqnq\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.945683 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmnbx\" (UniqueName: \"kubernetes.io/projected/5665f568-1d3e-48dc-8a32-7bd9ad02a037-kube-api-access-gmnbx\") pod \"keystone-db-sync-6cqnq\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:48:59 crc kubenswrapper[4758]: I0130 08:48:59.945700 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2rzkq" Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.007089 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c85c22cc-99a7-4939-904c-bffa8f2d5457-operator-scripts\") pod \"neutron-5131-account-create-update-tbcx5\" (UID: \"c85c22cc-99a7-4939-904c-bffa8f2d5457\") " pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.007148 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pdmh\" (UniqueName: \"kubernetes.io/projected/c85c22cc-99a7-4939-904c-bffa8f2d5457-kube-api-access-2pdmh\") pod \"neutron-5131-account-create-update-tbcx5\" (UID: \"c85c22cc-99a7-4939-904c-bffa8f2d5457\") " pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.008415 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c85c22cc-99a7-4939-904c-bffa8f2d5457-operator-scripts\") pod \"neutron-5131-account-create-update-tbcx5\" (UID: \"c85c22cc-99a7-4939-904c-bffa8f2d5457\") " pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.038211 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pdmh\" (UniqueName: \"kubernetes.io/projected/c85c22cc-99a7-4939-904c-bffa8f2d5457-kube-api-access-2pdmh\") pod \"neutron-5131-account-create-update-tbcx5\" (UID: \"c85c22cc-99a7-4939-904c-bffa8f2d5457\") " pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.060274 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cgr7c"] Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.170906 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.194188 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.652585 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-feed-account-create-update-qcmp4"] Jan 30 08:49:00 crc kubenswrapper[4758]: W0130 08:49:00.659881 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58afb54a_7b57_42bc_af7c_13db0bfd1580.slice/crio-87955bc73e5d7963ec43cad569a0fb06119ece1b4213dd7b9ab6c6c4f82ec748 WatchSource:0}: Error finding container 87955bc73e5d7963ec43cad569a0fb06119ece1b4213dd7b9ab6c6c4f82ec748: Status 404 returned error can't find the container with id 87955bc73e5d7963ec43cad569a0fb06119ece1b4213dd7b9ab6c6c4f82ec748 Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.688891 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2rzkq"] Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.755408 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bstt2"] Jan 30 08:49:00 crc kubenswrapper[4758]: W0130 08:49:00.758527 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a01921f_9f43_471a_bbd3_0a7e9bab364e.slice/crio-75efd4936693bd61c471db90ec8c484ef3e49142a4aff746beba6cd676272f98 WatchSource:0}: Error finding container 75efd4936693bd61c471db90ec8c484ef3e49142a4aff746beba6cd676272f98: Status 404 returned error can't find the container with id 75efd4936693bd61c471db90ec8c484ef3e49142a4aff746beba6cd676272f98 Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.780134 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-495a-account-create-update-k2qtr"] Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.902407 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bstt2" event={"ID":"8a01921f-9f43-471a-bbd3-0a7e9bab364e","Type":"ContainerStarted","Data":"75efd4936693bd61c471db90ec8c484ef3e49142a4aff746beba6cd676272f98"} Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.926679 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-495a-account-create-update-k2qtr" event={"ID":"f4bedf0c-06c1-4eaa-b731-3b8c2438456d","Type":"ContainerStarted","Data":"98d64788b2e4146b6e0c5cd1d465c9c9a9a919896eb41c7fa3f82de022f57544"} Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.935289 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2rzkq" event={"ID":"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30","Type":"ContainerStarted","Data":"2543f597f41a34e8ce4ecb182dddc5c7e9b80aa1fe865ceffab33db59555554a"} Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.947857 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cgr7c" event={"ID":"6b87c6fc-6491-419a-96bb-9542edb1e8aa","Type":"ContainerStarted","Data":"1d16e0f46d09a180bfaf46bb185c0c0c5a4a301473761aa6a905b4411c7a3f20"} Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.947902 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cgr7c" event={"ID":"6b87c6fc-6491-419a-96bb-9542edb1e8aa","Type":"ContainerStarted","Data":"6a2b1b56efb4ec4f5e3111fa2055368ffd6b8141c1330555ede25dc0591fb33e"} Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.954092 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-feed-account-create-update-qcmp4" event={"ID":"58afb54a-7b57-42bc-af7c-13db0bfd1580","Type":"ContainerStarted","Data":"87955bc73e5d7963ec43cad569a0fb06119ece1b4213dd7b9ab6c6c4f82ec748"} Jan 30 08:49:00 crc kubenswrapper[4758]: I0130 08:49:00.988273 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-cgr7c" podStartSLOduration=2.988253791 podStartE2EDuration="2.988253791s" podCreationTimestamp="2026-01-30 08:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:00.987792996 +0000 UTC m=+1145.960104577" watchObservedRunningTime="2026-01-30 08:49:00.988253791 +0000 UTC m=+1145.960565342" Jan 30 08:49:01 crc kubenswrapper[4758]: I0130 08:49:01.058144 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6cqnq"] Jan 30 08:49:01 crc kubenswrapper[4758]: W0130 08:49:01.090276 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5665f568_1d3e_48dc_8a32_7bd9ad02a037.slice/crio-d1f4dcf77cdd5aea59016fd8daa905932b1869616b357e10e2c333b4571e8106 WatchSource:0}: Error finding container d1f4dcf77cdd5aea59016fd8daa905932b1869616b357e10e2c333b4571e8106: Status 404 returned error can't find the container with id d1f4dcf77cdd5aea59016fd8daa905932b1869616b357e10e2c333b4571e8106 Jan 30 08:49:01 crc kubenswrapper[4758]: I0130 08:49:01.101194 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5131-account-create-update-tbcx5"] Jan 30 08:49:01 crc kubenswrapper[4758]: W0130 08:49:01.119653 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc85c22cc_99a7_4939_904c_bffa8f2d5457.slice/crio-ce0f4a94b7cc04e1a5dc3729a5db1288e56a94e7c845700c39031a3f59dae308 WatchSource:0}: Error finding container ce0f4a94b7cc04e1a5dc3729a5db1288e56a94e7c845700c39031a3f59dae308: Status 404 returned error can't find the container with id ce0f4a94b7cc04e1a5dc3729a5db1288e56a94e7c845700c39031a3f59dae308 Jan 30 08:49:01 crc kubenswrapper[4758]: E0130 08:49:01.647525 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b87c6fc_6491_419a_96bb_9542edb1e8aa.slice/crio-1d16e0f46d09a180bfaf46bb185c0c0c5a4a301473761aa6a905b4411c7a3f20.scope\": RecentStats: unable to find data in memory cache]" Jan 30 08:49:01 crc kubenswrapper[4758]: I0130 08:49:01.976257 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-feed-account-create-update-qcmp4" event={"ID":"58afb54a-7b57-42bc-af7c-13db0bfd1580","Type":"ContainerStarted","Data":"b2ff6cf87c18064183ff2ca83818d1799a9ee6c49eb71a91d7e28d365a9e731e"} Jan 30 08:49:01 crc kubenswrapper[4758]: I0130 08:49:01.989765 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bstt2" event={"ID":"8a01921f-9f43-471a-bbd3-0a7e9bab364e","Type":"ContainerStarted","Data":"0ae912bbbb171da1791fe8eb0cee80e9a7d55f62417065d9e913877b45490451"} Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.005643 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6cqnq" event={"ID":"5665f568-1d3e-48dc-8a32-7bd9ad02a037","Type":"ContainerStarted","Data":"d1f4dcf77cdd5aea59016fd8daa905932b1869616b357e10e2c333b4571e8106"} Jan 
30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.018857 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2rzkq" event={"ID":"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30","Type":"ContainerStarted","Data":"6c4d4c33fc49af87a76aa96b41f58bb62854886d94fd3aadaddede31f4e5c0ed"} Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.024567 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-feed-account-create-update-qcmp4" podStartSLOduration=3.024534824 podStartE2EDuration="3.024534824s" podCreationTimestamp="2026-01-30 08:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:02.008530175 +0000 UTC m=+1146.980841736" watchObservedRunningTime="2026-01-30 08:49:02.024534824 +0000 UTC m=+1146.996846375" Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.026509 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-495a-account-create-update-k2qtr" event={"ID":"f4bedf0c-06c1-4eaa-b731-3b8c2438456d","Type":"ContainerStarted","Data":"de4f25edc76809fd476b4ddf2f6fed1b04afc6e58967e48dd869ed6a37fbb265"} Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.040407 4758 generic.go:334] "Generic (PLEG): container finished" podID="6b87c6fc-6491-419a-96bb-9542edb1e8aa" containerID="1d16e0f46d09a180bfaf46bb185c0c0c5a4a301473761aa6a905b4411c7a3f20" exitCode=0 Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.040541 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cgr7c" event={"ID":"6b87c6fc-6491-419a-96bb-9542edb1e8aa","Type":"ContainerDied","Data":"1d16e0f46d09a180bfaf46bb185c0c0c5a4a301473761aa6a905b4411c7a3f20"} Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.045499 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-bstt2" podStartSLOduration=3.045469849 podStartE2EDuration="3.045469849s" podCreationTimestamp="2026-01-30 08:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:02.041339738 +0000 UTC m=+1147.013651289" watchObservedRunningTime="2026-01-30 08:49:02.045469849 +0000 UTC m=+1147.017781400" Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.054009 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5131-account-create-update-tbcx5" event={"ID":"c85c22cc-99a7-4939-904c-bffa8f2d5457","Type":"ContainerStarted","Data":"edfcbb4ee1b893b2f19bb3b08ec37186fb229027e460be6e17c73cf2a49baec0"} Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.054076 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5131-account-create-update-tbcx5" event={"ID":"c85c22cc-99a7-4939-904c-bffa8f2d5457","Type":"ContainerStarted","Data":"ce0f4a94b7cc04e1a5dc3729a5db1288e56a94e7c845700c39031a3f59dae308"} Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.120784 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-495a-account-create-update-k2qtr" podStartSLOduration=3.120758152 podStartE2EDuration="3.120758152s" podCreationTimestamp="2026-01-30 08:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:02.078359544 +0000 UTC m=+1147.050671095" watchObservedRunningTime="2026-01-30 08:49:02.120758152 
+0000 UTC m=+1147.093069853" Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.163822 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-2rzkq" podStartSLOduration=3.163802819 podStartE2EDuration="3.163802819s" podCreationTimestamp="2026-01-30 08:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:02.160611359 +0000 UTC m=+1147.132922910" watchObservedRunningTime="2026-01-30 08:49:02.163802819 +0000 UTC m=+1147.136114370" Jan 30 08:49:02 crc kubenswrapper[4758]: I0130 08:49:02.219108 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5131-account-create-update-tbcx5" podStartSLOduration=3.219074747 podStartE2EDuration="3.219074747s" podCreationTimestamp="2026-01-30 08:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:02.206708793 +0000 UTC m=+1147.179020364" watchObservedRunningTime="2026-01-30 08:49:02.219074747 +0000 UTC m=+1147.191386308" Jan 30 08:49:03 crc kubenswrapper[4758]: I0130 08:49:03.097992 4758 generic.go:334] "Generic (PLEG): container finished" podID="dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30" containerID="6c4d4c33fc49af87a76aa96b41f58bb62854886d94fd3aadaddede31f4e5c0ed" exitCode=0 Jan 30 08:49:03 crc kubenswrapper[4758]: I0130 08:49:03.099746 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2rzkq" event={"ID":"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30","Type":"ContainerDied","Data":"6c4d4c33fc49af87a76aa96b41f58bb62854886d94fd3aadaddede31f4e5c0ed"} Jan 30 08:49:03 crc kubenswrapper[4758]: I0130 08:49:03.622611 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cgr7c" Jan 30 08:49:03 crc kubenswrapper[4758]: I0130 08:49:03.727606 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnzdm\" (UniqueName: \"kubernetes.io/projected/6b87c6fc-6491-419a-96bb-9542edb1e8aa-kube-api-access-bnzdm\") pod \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\" (UID: \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\") " Jan 30 08:49:03 crc kubenswrapper[4758]: I0130 08:49:03.727680 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b87c6fc-6491-419a-96bb-9542edb1e8aa-operator-scripts\") pod \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\" (UID: \"6b87c6fc-6491-419a-96bb-9542edb1e8aa\") " Jan 30 08:49:03 crc kubenswrapper[4758]: I0130 08:49:03.728665 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b87c6fc-6491-419a-96bb-9542edb1e8aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b87c6fc-6491-419a-96bb-9542edb1e8aa" (UID: "6b87c6fc-6491-419a-96bb-9542edb1e8aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:03 crc kubenswrapper[4758]: I0130 08:49:03.736199 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b87c6fc-6491-419a-96bb-9542edb1e8aa-kube-api-access-bnzdm" (OuterVolumeSpecName: "kube-api-access-bnzdm") pod "6b87c6fc-6491-419a-96bb-9542edb1e8aa" (UID: "6b87c6fc-6491-419a-96bb-9542edb1e8aa"). InnerVolumeSpecName "kube-api-access-bnzdm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:03 crc kubenswrapper[4758]: I0130 08:49:03.830569 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b87c6fc-6491-419a-96bb-9542edb1e8aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:03 crc kubenswrapper[4758]: I0130 08:49:03.830603 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnzdm\" (UniqueName: \"kubernetes.io/projected/6b87c6fc-6491-419a-96bb-9542edb1e8aa-kube-api-access-bnzdm\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.107006 4758 generic.go:334] "Generic (PLEG): container finished" podID="f4bedf0c-06c1-4eaa-b731-3b8c2438456d" containerID="de4f25edc76809fd476b4ddf2f6fed1b04afc6e58967e48dd869ed6a37fbb265" exitCode=0 Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.107082 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-495a-account-create-update-k2qtr" event={"ID":"f4bedf0c-06c1-4eaa-b731-3b8c2438456d","Type":"ContainerDied","Data":"de4f25edc76809fd476b4ddf2f6fed1b04afc6e58967e48dd869ed6a37fbb265"} Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.109279 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cgr7c" event={"ID":"6b87c6fc-6491-419a-96bb-9542edb1e8aa","Type":"ContainerDied","Data":"6a2b1b56efb4ec4f5e3111fa2055368ffd6b8141c1330555ede25dc0591fb33e"} Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.109304 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a2b1b56efb4ec4f5e3111fa2055368ffd6b8141c1330555ede25dc0591fb33e" Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.109344 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-cgr7c" Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.111877 4758 generic.go:334] "Generic (PLEG): container finished" podID="c85c22cc-99a7-4939-904c-bffa8f2d5457" containerID="edfcbb4ee1b893b2f19bb3b08ec37186fb229027e460be6e17c73cf2a49baec0" exitCode=0 Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.111925 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5131-account-create-update-tbcx5" event={"ID":"c85c22cc-99a7-4939-904c-bffa8f2d5457","Type":"ContainerDied","Data":"edfcbb4ee1b893b2f19bb3b08ec37186fb229027e460be6e17c73cf2a49baec0"} Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.117563 4758 generic.go:334] "Generic (PLEG): container finished" podID="58afb54a-7b57-42bc-af7c-13db0bfd1580" containerID="b2ff6cf87c18064183ff2ca83818d1799a9ee6c49eb71a91d7e28d365a9e731e" exitCode=0 Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.117642 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-feed-account-create-update-qcmp4" event={"ID":"58afb54a-7b57-42bc-af7c-13db0bfd1580","Type":"ContainerDied","Data":"b2ff6cf87c18064183ff2ca83818d1799a9ee6c49eb71a91d7e28d365a9e731e"} Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.119876 4758 generic.go:334] "Generic (PLEG): container finished" podID="8a01921f-9f43-471a-bbd3-0a7e9bab364e" containerID="0ae912bbbb171da1791fe8eb0cee80e9a7d55f62417065d9e913877b45490451" exitCode=0 Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.119940 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bstt2" event={"ID":"8a01921f-9f43-471a-bbd3-0a7e9bab364e","Type":"ContainerDied","Data":"0ae912bbbb171da1791fe8eb0cee80e9a7d55f62417065d9e913877b45490451"} Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.564263 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2rzkq" Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.646291 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g58pq\" (UniqueName: \"kubernetes.io/projected/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-kube-api-access-g58pq\") pod \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\" (UID: \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\") " Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.646363 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-operator-scripts\") pod \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\" (UID: \"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30\") " Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.647551 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30" (UID: "dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.658663 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-kube-api-access-g58pq" (OuterVolumeSpecName: "kube-api-access-g58pq") pod "dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30" (UID: "dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30"). InnerVolumeSpecName "kube-api-access-g58pq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.749001 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g58pq\" (UniqueName: \"kubernetes.io/projected/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-kube-api-access-g58pq\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:04 crc kubenswrapper[4758]: I0130 08:49:04.749062 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:05 crc kubenswrapper[4758]: I0130 08:49:05.140363 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2rzkq" Jan 30 08:49:05 crc kubenswrapper[4758]: I0130 08:49:05.145701 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2rzkq" event={"ID":"dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30","Type":"ContainerDied","Data":"2543f597f41a34e8ce4ecb182dddc5c7e9b80aa1fe865ceffab33db59555554a"} Jan 30 08:49:05 crc kubenswrapper[4758]: I0130 08:49:05.145740 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2543f597f41a34e8ce4ecb182dddc5c7e9b80aa1fe865ceffab33db59555554a" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.548548 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.570388 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bstt2" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.576107 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.581615 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.646256 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lzfr\" (UniqueName: \"kubernetes.io/projected/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-kube-api-access-2lzfr\") pod \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\" (UID: \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\") " Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.646374 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqh72\" (UniqueName: \"kubernetes.io/projected/58afb54a-7b57-42bc-af7c-13db0bfd1580-kube-api-access-kqh72\") pod \"58afb54a-7b57-42bc-af7c-13db0bfd1580\" (UID: \"58afb54a-7b57-42bc-af7c-13db0bfd1580\") " Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.646416 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58afb54a-7b57-42bc-af7c-13db0bfd1580-operator-scripts\") pod \"58afb54a-7b57-42bc-af7c-13db0bfd1580\" (UID: \"58afb54a-7b57-42bc-af7c-13db0bfd1580\") " Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.646546 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a01921f-9f43-471a-bbd3-0a7e9bab364e-operator-scripts\") pod \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\" (UID: \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\") " Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.646602 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-operator-scripts\") pod \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\" (UID: \"f4bedf0c-06c1-4eaa-b731-3b8c2438456d\") " Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.646652 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c85c22cc-99a7-4939-904c-bffa8f2d5457-operator-scripts\") pod \"c85c22cc-99a7-4939-904c-bffa8f2d5457\" (UID: \"c85c22cc-99a7-4939-904c-bffa8f2d5457\") " Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.646723 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz2tl\" (UniqueName: \"kubernetes.io/projected/8a01921f-9f43-471a-bbd3-0a7e9bab364e-kube-api-access-zz2tl\") pod \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\" (UID: \"8a01921f-9f43-471a-bbd3-0a7e9bab364e\") " Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.646741 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pdmh\" (UniqueName: \"kubernetes.io/projected/c85c22cc-99a7-4939-904c-bffa8f2d5457-kube-api-access-2pdmh\") pod \"c85c22cc-99a7-4939-904c-bffa8f2d5457\" (UID: \"c85c22cc-99a7-4939-904c-bffa8f2d5457\") " Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.647143 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58afb54a-7b57-42bc-af7c-13db0bfd1580-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58afb54a-7b57-42bc-af7c-13db0bfd1580" (UID: "58afb54a-7b57-42bc-af7c-13db0bfd1580"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.647281 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c85c22cc-99a7-4939-904c-bffa8f2d5457-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c85c22cc-99a7-4939-904c-bffa8f2d5457" (UID: "c85c22cc-99a7-4939-904c-bffa8f2d5457"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.647153 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a01921f-9f43-471a-bbd3-0a7e9bab364e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a01921f-9f43-471a-bbd3-0a7e9bab364e" (UID: "8a01921f-9f43-471a-bbd3-0a7e9bab364e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.647427 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4bedf0c-06c1-4eaa-b731-3b8c2438456d" (UID: "f4bedf0c-06c1-4eaa-b731-3b8c2438456d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.647620 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.647642 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c85c22cc-99a7-4939-904c-bffa8f2d5457-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.647653 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58afb54a-7b57-42bc-af7c-13db0bfd1580-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.647663 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a01921f-9f43-471a-bbd3-0a7e9bab364e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.653264 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a01921f-9f43-471a-bbd3-0a7e9bab364e-kube-api-access-zz2tl" (OuterVolumeSpecName: "kube-api-access-zz2tl") pod "8a01921f-9f43-471a-bbd3-0a7e9bab364e" (UID: "8a01921f-9f43-471a-bbd3-0a7e9bab364e"). InnerVolumeSpecName "kube-api-access-zz2tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.653760 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58afb54a-7b57-42bc-af7c-13db0bfd1580-kube-api-access-kqh72" (OuterVolumeSpecName: "kube-api-access-kqh72") pod "58afb54a-7b57-42bc-af7c-13db0bfd1580" (UID: "58afb54a-7b57-42bc-af7c-13db0bfd1580"). InnerVolumeSpecName "kube-api-access-kqh72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.653879 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-kube-api-access-2lzfr" (OuterVolumeSpecName: "kube-api-access-2lzfr") pod "f4bedf0c-06c1-4eaa-b731-3b8c2438456d" (UID: "f4bedf0c-06c1-4eaa-b731-3b8c2438456d"). InnerVolumeSpecName "kube-api-access-2lzfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.654221 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c85c22cc-99a7-4939-904c-bffa8f2d5457-kube-api-access-2pdmh" (OuterVolumeSpecName: "kube-api-access-2pdmh") pod "c85c22cc-99a7-4939-904c-bffa8f2d5457" (UID: "c85c22cc-99a7-4939-904c-bffa8f2d5457"). InnerVolumeSpecName "kube-api-access-2pdmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.749009 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz2tl\" (UniqueName: \"kubernetes.io/projected/8a01921f-9f43-471a-bbd3-0a7e9bab364e-kube-api-access-zz2tl\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.749064 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pdmh\" (UniqueName: \"kubernetes.io/projected/c85c22cc-99a7-4939-904c-bffa8f2d5457-kube-api-access-2pdmh\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.749079 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lzfr\" (UniqueName: \"kubernetes.io/projected/f4bedf0c-06c1-4eaa-b731-3b8c2438456d-kube-api-access-2lzfr\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:08 crc kubenswrapper[4758]: I0130 08:49:08.749091 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqh72\" (UniqueName: \"kubernetes.io/projected/58afb54a-7b57-42bc-af7c-13db0bfd1580-kube-api-access-kqh72\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.178236 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-495a-account-create-update-k2qtr" event={"ID":"f4bedf0c-06c1-4eaa-b731-3b8c2438456d","Type":"ContainerDied","Data":"98d64788b2e4146b6e0c5cd1d465c9c9a9a919896eb41c7fa3f82de022f57544"} Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.178266 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-495a-account-create-update-k2qtr" Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.178699 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98d64788b2e4146b6e0c5cd1d465c9c9a9a919896eb41c7fa3f82de022f57544" Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.179556 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5131-account-create-update-tbcx5" event={"ID":"c85c22cc-99a7-4939-904c-bffa8f2d5457","Type":"ContainerDied","Data":"ce0f4a94b7cc04e1a5dc3729a5db1288e56a94e7c845700c39031a3f59dae308"} Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.179583 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce0f4a94b7cc04e1a5dc3729a5db1288e56a94e7c845700c39031a3f59dae308" Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.179588 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5131-account-create-update-tbcx5" Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.181135 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-feed-account-create-update-qcmp4" Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.181135 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-feed-account-create-update-qcmp4" event={"ID":"58afb54a-7b57-42bc-af7c-13db0bfd1580","Type":"ContainerDied","Data":"87955bc73e5d7963ec43cad569a0fb06119ece1b4213dd7b9ab6c6c4f82ec748"} Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.181349 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87955bc73e5d7963ec43cad569a0fb06119ece1b4213dd7b9ab6c6c4f82ec748" Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.182617 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bstt2" event={"ID":"8a01921f-9f43-471a-bbd3-0a7e9bab364e","Type":"ContainerDied","Data":"75efd4936693bd61c471db90ec8c484ef3e49142a4aff746beba6cd676272f98"} Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.182646 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75efd4936693bd61c471db90ec8c484ef3e49142a4aff746beba6cd676272f98" Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.182707 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bstt2" Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.189897 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6cqnq" event={"ID":"5665f568-1d3e-48dc-8a32-7bd9ad02a037","Type":"ContainerStarted","Data":"1513688fede19d670ec6826188636f5ccd6bae62325cf06859976dc36b250613"} Jan 30 08:49:09 crc kubenswrapper[4758]: I0130 08:49:09.212337 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-6cqnq" podStartSLOduration=2.937211729 podStartE2EDuration="10.212314897s" podCreationTimestamp="2026-01-30 08:48:59 +0000 UTC" firstStartedPulling="2026-01-30 08:49:01.098463044 +0000 UTC m=+1146.070774595" lastFinishedPulling="2026-01-30 08:49:08.373566212 +0000 UTC m=+1153.345877763" observedRunningTime="2026-01-30 08:49:09.211831892 +0000 UTC m=+1154.184143443" watchObservedRunningTime="2026-01-30 08:49:09.212314897 +0000 UTC m=+1154.184626448" Jan 30 08:49:10 crc kubenswrapper[4758]: I0130 08:49:10.199358 4758 generic.go:334] "Generic (PLEG): container finished" podID="d2e127b5-11e2-40a6-8389-a9d08b8cae4f" containerID="2906f8c6c80f27b1b5d6a346bfa1c2bccd0d8111cf2311c1ea9793ee001e6172" exitCode=0 Jan 30 08:49:10 crc kubenswrapper[4758]: I0130 08:49:10.199381 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kc8cg" event={"ID":"d2e127b5-11e2-40a6-8389-a9d08b8cae4f","Type":"ContainerDied","Data":"2906f8c6c80f27b1b5d6a346bfa1c2bccd0d8111cf2311c1ea9793ee001e6172"} Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.579009 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kc8cg" Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.595223 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-db-sync-config-data\") pod \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.595453 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7nc4\" (UniqueName: \"kubernetes.io/projected/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-kube-api-access-g7nc4\") pod \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.595507 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-config-data\") pod \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.595574 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-combined-ca-bundle\") pod \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\" (UID: \"d2e127b5-11e2-40a6-8389-a9d08b8cae4f\") " Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.611975 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d2e127b5-11e2-40a6-8389-a9d08b8cae4f" (UID: "d2e127b5-11e2-40a6-8389-a9d08b8cae4f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.621387 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-kube-api-access-g7nc4" (OuterVolumeSpecName: "kube-api-access-g7nc4") pod "d2e127b5-11e2-40a6-8389-a9d08b8cae4f" (UID: "d2e127b5-11e2-40a6-8389-a9d08b8cae4f"). InnerVolumeSpecName "kube-api-access-g7nc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.639297 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2e127b5-11e2-40a6-8389-a9d08b8cae4f" (UID: "d2e127b5-11e2-40a6-8389-a9d08b8cae4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.648877 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-config-data" (OuterVolumeSpecName: "config-data") pod "d2e127b5-11e2-40a6-8389-a9d08b8cae4f" (UID: "d2e127b5-11e2-40a6-8389-a9d08b8cae4f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.697372 4758 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.697608 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7nc4\" (UniqueName: \"kubernetes.io/projected/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-kube-api-access-g7nc4\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.697690 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:11 crc kubenswrapper[4758]: I0130 08:49:11.697748 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e127b5-11e2-40a6-8389-a9d08b8cae4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.218506 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kc8cg" event={"ID":"d2e127b5-11e2-40a6-8389-a9d08b8cae4f","Type":"ContainerDied","Data":"32a7ffa868b27537045370b7000a548e24a7b459460ee6419c99353ed8b5d949"} Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.219081 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32a7ffa868b27537045370b7000a548e24a7b459460ee6419c99353ed8b5d949" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.218715 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kc8cg" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.648489 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-fj5s7"] Jan 30 08:49:12 crc kubenswrapper[4758]: E0130 08:49:12.649105 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85c22cc-99a7-4939-904c-bffa8f2d5457" containerName="mariadb-account-create-update" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649116 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85c22cc-99a7-4939-904c-bffa8f2d5457" containerName="mariadb-account-create-update" Jan 30 08:49:12 crc kubenswrapper[4758]: E0130 08:49:12.649136 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30" containerName="mariadb-database-create" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649142 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30" containerName="mariadb-database-create" Jan 30 08:49:12 crc kubenswrapper[4758]: E0130 08:49:12.649156 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a01921f-9f43-471a-bbd3-0a7e9bab364e" containerName="mariadb-database-create" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649162 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a01921f-9f43-471a-bbd3-0a7e9bab364e" containerName="mariadb-database-create" Jan 30 08:49:12 crc kubenswrapper[4758]: E0130 08:49:12.649169 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58afb54a-7b57-42bc-af7c-13db0bfd1580" containerName="mariadb-account-create-update" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649176 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="58afb54a-7b57-42bc-af7c-13db0bfd1580" containerName="mariadb-account-create-update" Jan 30 08:49:12 crc kubenswrapper[4758]: E0130 08:49:12.649186 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4bedf0c-06c1-4eaa-b731-3b8c2438456d" containerName="mariadb-account-create-update" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649192 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4bedf0c-06c1-4eaa-b731-3b8c2438456d" containerName="mariadb-account-create-update" Jan 30 08:49:12 crc kubenswrapper[4758]: E0130 08:49:12.649199 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2e127b5-11e2-40a6-8389-a9d08b8cae4f" containerName="glance-db-sync" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649206 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2e127b5-11e2-40a6-8389-a9d08b8cae4f" containerName="glance-db-sync" Jan 30 08:49:12 crc kubenswrapper[4758]: E0130 08:49:12.649216 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b87c6fc-6491-419a-96bb-9542edb1e8aa" containerName="mariadb-database-create" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649222 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b87c6fc-6491-419a-96bb-9542edb1e8aa" containerName="mariadb-database-create" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649358 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a01921f-9f43-471a-bbd3-0a7e9bab364e" containerName="mariadb-database-create" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649366 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b87c6fc-6491-419a-96bb-9542edb1e8aa" containerName="mariadb-database-create" 
Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649378 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30" containerName="mariadb-database-create" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649383 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4bedf0c-06c1-4eaa-b731-3b8c2438456d" containerName="mariadb-account-create-update" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649397 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="58afb54a-7b57-42bc-af7c-13db0bfd1580" containerName="mariadb-account-create-update" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649405 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2e127b5-11e2-40a6-8389-a9d08b8cae4f" containerName="glance-db-sync" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.649415 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c85c22cc-99a7-4939-904c-bffa8f2d5457" containerName="mariadb-account-create-update" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.650150 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.674590 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-fj5s7"] Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.714808 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-dns-svc\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.714862 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.714947 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.715103 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-config\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.715247 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stwf7\" (UniqueName: \"kubernetes.io/projected/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-kube-api-access-stwf7\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.816780 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-dns-svc\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.816852 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.816923 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.816985 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-config\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.817021 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stwf7\" (UniqueName: \"kubernetes.io/projected/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-kube-api-access-stwf7\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.818001 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-config\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.818152 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-dns-svc\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.818325 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.818575 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.852491 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stwf7\" (UniqueName: \"kubernetes.io/projected/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-kube-api-access-stwf7\") pod \"dnsmasq-dns-74dc88fc-fj5s7\" (UID: 
\"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") " pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:12 crc kubenswrapper[4758]: I0130 08:49:12.969450 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:13 crc kubenswrapper[4758]: I0130 08:49:13.296282 4758 generic.go:334] "Generic (PLEG): container finished" podID="5665f568-1d3e-48dc-8a32-7bd9ad02a037" containerID="1513688fede19d670ec6826188636f5ccd6bae62325cf06859976dc36b250613" exitCode=0 Jan 30 08:49:13 crc kubenswrapper[4758]: I0130 08:49:13.297601 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6cqnq" event={"ID":"5665f568-1d3e-48dc-8a32-7bd9ad02a037","Type":"ContainerDied","Data":"1513688fede19d670ec6826188636f5ccd6bae62325cf06859976dc36b250613"} Jan 30 08:49:13 crc kubenswrapper[4758]: I0130 08:49:13.544026 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-fj5s7"] Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.305813 4758 generic.go:334] "Generic (PLEG): container finished" podID="d114ad35-e2cf-4ff7-8dfc-d747a159de5d" containerID="b9fc2b9b944e1e445bbffd968c0e5c3fd8f7cf9a43829966a1d32764c98c77b9" exitCode=0 Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.305927 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" event={"ID":"d114ad35-e2cf-4ff7-8dfc-d747a159de5d","Type":"ContainerDied","Data":"b9fc2b9b944e1e445bbffd968c0e5c3fd8f7cf9a43829966a1d32764c98c77b9"} Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.307160 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" event={"ID":"d114ad35-e2cf-4ff7-8dfc-d747a159de5d","Type":"ContainerStarted","Data":"a90ebfb23c1318d1d27a1d94512248bbaa3e37a016b8932a8f0ed5677213f22c"} Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.637285 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.676380 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-combined-ca-bundle\") pod \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.676534 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-config-data\") pod \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.676647 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmnbx\" (UniqueName: \"kubernetes.io/projected/5665f568-1d3e-48dc-8a32-7bd9ad02a037-kube-api-access-gmnbx\") pod \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\" (UID: \"5665f568-1d3e-48dc-8a32-7bd9ad02a037\") " Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.690233 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5665f568-1d3e-48dc-8a32-7bd9ad02a037-kube-api-access-gmnbx" (OuterVolumeSpecName: "kube-api-access-gmnbx") pod "5665f568-1d3e-48dc-8a32-7bd9ad02a037" (UID: "5665f568-1d3e-48dc-8a32-7bd9ad02a037"). InnerVolumeSpecName "kube-api-access-gmnbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.716519 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5665f568-1d3e-48dc-8a32-7bd9ad02a037" (UID: "5665f568-1d3e-48dc-8a32-7bd9ad02a037"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.722843 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-config-data" (OuterVolumeSpecName: "config-data") pod "5665f568-1d3e-48dc-8a32-7bd9ad02a037" (UID: "5665f568-1d3e-48dc-8a32-7bd9ad02a037"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.779238 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmnbx\" (UniqueName: \"kubernetes.io/projected/5665f568-1d3e-48dc-8a32-7bd9ad02a037-kube-api-access-gmnbx\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.779279 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:14 crc kubenswrapper[4758]: I0130 08:49:14.779290 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5665f568-1d3e-48dc-8a32-7bd9ad02a037-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.316553 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6cqnq" event={"ID":"5665f568-1d3e-48dc-8a32-7bd9ad02a037","Type":"ContainerDied","Data":"d1f4dcf77cdd5aea59016fd8daa905932b1869616b357e10e2c333b4571e8106"} Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.317682 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1f4dcf77cdd5aea59016fd8daa905932b1869616b357e10e2c333b4571e8106" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.316578 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6cqnq" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.318431 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" event={"ID":"d114ad35-e2cf-4ff7-8dfc-d747a159de5d","Type":"ContainerStarted","Data":"0907523829ec21a592ed73a39e4715a74dfc754e114890f00161aa67a23fe215"} Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.318648 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.338345 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" podStartSLOduration=3.338323347 podStartE2EDuration="3.338323347s" podCreationTimestamp="2026-01-30 08:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:15.337936106 +0000 UTC m=+1160.310247667" watchObservedRunningTime="2026-01-30 08:49:15.338323347 +0000 UTC m=+1160.310634898" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.546719 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-fj5s7"] Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.585173 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-dxjh5"] Jan 30 08:49:15 crc kubenswrapper[4758]: E0130 08:49:15.585572 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5665f568-1d3e-48dc-8a32-7bd9ad02a037" containerName="keystone-db-sync" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.585591 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="5665f568-1d3e-48dc-8a32-7bd9ad02a037" containerName="keystone-db-sync" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.585753 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="5665f568-1d3e-48dc-8a32-7bd9ad02a037" containerName="keystone-db-sync" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.586622 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.616365 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8tkhn"] Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.617395 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.619961 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.620113 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w5f5m" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.620948 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.621050 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.621373 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.631357 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-dxjh5"] Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.651444 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8tkhn"] Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694193 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-config-data\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694264 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-config\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694304 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j595n\" (UniqueName: \"kubernetes.io/projected/70857a89-e946-4f1d-b19b-fbbd9445de0f-kube-api-access-j595n\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694341 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694374 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-dns-svc\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694403 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-combined-ca-bundle\") pod \"keystone-bootstrap-8tkhn\" (UID: 
\"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694451 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-scripts\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694527 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-credential-keys\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694551 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-fernet-keys\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694575 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bgc7\" (UniqueName: \"kubernetes.io/projected/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-kube-api-access-7bgc7\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.694605 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795636 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-credential-keys\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795691 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-fernet-keys\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795725 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bgc7\" (UniqueName: \"kubernetes.io/projected/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-kube-api-access-7bgc7\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795749 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: 
\"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795768 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-config-data\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795800 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-config\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795822 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j595n\" (UniqueName: \"kubernetes.io/projected/70857a89-e946-4f1d-b19b-fbbd9445de0f-kube-api-access-j595n\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795843 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795865 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-dns-svc\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795888 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-combined-ca-bundle\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.795918 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-scripts\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.797263 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.798824 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-dns-svc\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 
08:49:15.799538 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.800214 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-config\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.805100 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-combined-ca-bundle\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.810879 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-fernet-keys\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.810990 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-credential-keys\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.811438 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-config-data\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.824595 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-scripts\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.839864 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bgc7\" (UniqueName: \"kubernetes.io/projected/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-kube-api-access-7bgc7\") pod \"keystone-bootstrap-8tkhn\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") " pod="openstack/keystone-bootstrap-8tkhn" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.866805 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j595n\" (UniqueName: \"kubernetes.io/projected/70857a89-e946-4f1d-b19b-fbbd9445de0f-kube-api-access-j595n\") pod \"dnsmasq-dns-7d5679f497-dxjh5\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") " pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.911448 4758 util.go:30] "No sandbox for pod can be found. 
Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.945709 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5755df7977-7khvs"]
Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.947115 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5755df7977-7khvs"
Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.955433 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-pzl98"
Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.955485 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.955444 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.955677 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Jan 30 08:49:15 crc kubenswrapper[4758]: I0130 08:49:15.976432 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8tkhn"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.014871 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5755df7977-7khvs"]
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.105122 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-scripts\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.105179 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fq8m\" (UniqueName: \"kubernetes.io/projected/57c2a333-014d-4c26-b459-fd88537d21ad-kube-api-access-7fq8m\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.105201 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/57c2a333-014d-4c26-b459-fd88537d21ad-horizon-secret-key\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.105225 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-config-data\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.105294 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c2a333-014d-4c26-b459-fd88537d21ad-logs\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.105373 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-x2v6d"]
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.106589 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-x2v6d"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.113632 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.124409 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-89x58"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.128270 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.148084 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-x2v6d"]
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209425 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fq8m\" (UniqueName: \"kubernetes.io/projected/57c2a333-014d-4c26-b459-fd88537d21ad-kube-api-access-7fq8m\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209499 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/57c2a333-014d-4c26-b459-fd88537d21ad-horizon-secret-key\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209537 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvklh\" (UniqueName: \"kubernetes.io/projected/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-kube-api-access-rvklh\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209599 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-config-data\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209641 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-etc-machine-id\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209670 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-config-data\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209754 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c2a333-014d-4c26-b459-fd88537d21ad-logs\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209797 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-db-sync-config-data\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209824 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-combined-ca-bundle\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209879 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-scripts\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.209906 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-scripts\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.211786 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c2a333-014d-4c26-b459-fd88537d21ad-logs\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.214667 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-scripts\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.218082 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-config-data\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.248003 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/57c2a333-014d-4c26-b459-fd88537d21ad-horizon-secret-key\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.248147 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-lqbmm"]
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.264782 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lqbmm"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.274672 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.275241 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.275471 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-tdg2m"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.290720 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fq8m\" (UniqueName: \"kubernetes.io/projected/57c2a333-014d-4c26-b459-fd88537d21ad-kube-api-access-7fq8m\") pod \"horizon-5755df7977-7khvs\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " pod="openstack/horizon-5755df7977-7khvs"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311028 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-db-sync-config-data\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311092 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-combined-ca-bundle\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311123 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmcjv\" (UniqueName: \"kubernetes.io/projected/11f7c236-867a-465b-9514-de6a765b312b-kube-api-access-wmcjv\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311175 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-scripts\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311195 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11f7c236-867a-465b-9514-de6a765b312b-logs\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311250 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvklh\" (UniqueName: \"kubernetes.io/projected/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-kube-api-access-rvklh\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311272 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-config-data\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm"
(UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311308 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-etc-machine-id\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311326 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-config-data\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311361 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-combined-ca-bundle\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311381 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-scripts\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.311691 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-dxjh5"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.319970 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-etc-machine-id\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.330651 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-combined-ca-bundle\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.333604 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-scripts\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.337925 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-db-sync-config-data\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.340725 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-config-data\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " 
pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.375119 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lqbmm"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.405152 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvklh\" (UniqueName: \"kubernetes.io/projected/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-kube-api-access-rvklh\") pod \"cinder-db-sync-x2v6d\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.411433 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56798b757f-7trbn"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.412843 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmcjv\" (UniqueName: \"kubernetes.io/projected/11f7c236-867a-465b-9514-de6a765b312b-kube-api-access-wmcjv\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.412980 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11f7c236-867a-465b-9514-de6a765b312b-logs\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.413146 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-config-data\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.413345 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-combined-ca-bundle\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.413450 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-scripts\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.413540 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11f7c236-867a-465b-9514-de6a765b312b-logs\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.413169 4758 util.go:30] "No sandbox for pod can be found. 
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.421189 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-config-data\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.440908 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-scripts\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.441507 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-combined-ca-bundle\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.457152 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmcjv\" (UniqueName: \"kubernetes.io/projected/11f7c236-867a-465b-9514-de6a765b312b-kube-api-access-wmcjv\") pod \"placement-db-sync-lqbmm\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " pod="openstack/placement-db-sync-lqbmm"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.459126 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-c24lg"]
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.460946 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-c24lg"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.462883 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4bspq"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.463132 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.493902 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-7trbn"]
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.505915 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-c24lg"]
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.514265 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-dns-svc\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.514330 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.514371 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lrbj\" (UniqueName: \"kubernetes.io/projected/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-kube-api-access-9lrbj\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.514400 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.514460 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-combined-ca-bundle\") pod \"barbican-db-sync-c24lg\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " pod="openstack/barbican-db-sync-c24lg"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.514507 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-db-sync-config-data\") pod \"barbican-db-sync-c24lg\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " pod="openstack/barbican-db-sync-c24lg"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.514540 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-config\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn"
pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.514603 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v55sf\" (UniqueName: \"kubernetes.io/projected/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-kube-api-access-v55sf\") pod \"barbican-db-sync-c24lg\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.524737 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.539357 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.544788 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.545419 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.565357 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.566770 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-57679b99fc-55gj9"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.568864 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.594059 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-z5hrj"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.597782 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.598922 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.605650 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.605866 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.605977 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-t9t64" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.616061 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-combined-ca-bundle\") pod \"barbican-db-sync-c24lg\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.616125 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-db-sync-config-data\") pod \"barbican-db-sync-c24lg\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.616156 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-config\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.616213 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v55sf\" (UniqueName: \"kubernetes.io/projected/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-kube-api-access-v55sf\") pod \"barbican-db-sync-c24lg\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.616265 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-dns-svc\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.616302 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.616334 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lrbj\" (UniqueName: \"kubernetes.io/projected/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-kube-api-access-9lrbj\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.616361 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-7trbn\" 
(UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.617645 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.618282 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-dns-svc\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.618803 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.619963 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-config\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.630250 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lqbmm" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.631938 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-combined-ca-bundle\") pod \"barbican-db-sync-c24lg\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.643907 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-db-sync-config-data\") pod \"barbican-db-sync-c24lg\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.651305 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.710999 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v55sf\" (UniqueName: \"kubernetes.io/projected/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-kube-api-access-v55sf\") pod \"barbican-db-sync-c24lg\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.724583 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lrbj\" (UniqueName: \"kubernetes.io/projected/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-kube-api-access-9lrbj\") pod \"dnsmasq-dns-56798b757f-7trbn\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.725901 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-config-data\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.727453 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsgzh\" (UniqueName: \"kubernetes.io/projected/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-kube-api-access-dsgzh\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.727498 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-scripts\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.741761 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2rft\" (UniqueName: \"kubernetes.io/projected/b166e095-ba6b-443f-8c0a-0e83bb698ccd-kube-api-access-v2rft\") pod \"neutron-db-sync-z5hrj\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.741882 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-log-httpd\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.741909 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-config\") pod \"neutron-db-sync-z5hrj\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.742026 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqdnc\" (UniqueName: \"kubernetes.io/projected/6a53958b-ee42-4dea-af5a-086e825f672e-kube-api-access-bqdnc\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.742193 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-scripts\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.742268 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-run-httpd\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.742295 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.742324 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6a53958b-ee42-4dea-af5a-086e825f672e-horizon-secret-key\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.742397 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.742464 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-combined-ca-bundle\") pod \"neutron-db-sync-z5hrj\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.742487 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-config-data\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.742517 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a53958b-ee42-4dea-af5a-086e825f672e-logs\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.760672 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.799765 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-z5hrj"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.801743 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-c24lg" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845520 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-scripts\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845577 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-run-httpd\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845610 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845634 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6a53958b-ee42-4dea-af5a-086e825f672e-horizon-secret-key\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845670 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845714 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-combined-ca-bundle\") pod \"neutron-db-sync-z5hrj\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845735 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-config-data\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845756 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a53958b-ee42-4dea-af5a-086e825f672e-logs\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845802 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-config-data\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845845 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgzh\" (UniqueName: 
\"kubernetes.io/projected/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-kube-api-access-dsgzh\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845873 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-scripts\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845899 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2rft\" (UniqueName: \"kubernetes.io/projected/b166e095-ba6b-443f-8c0a-0e83bb698ccd-kube-api-access-v2rft\") pod \"neutron-db-sync-z5hrj\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845943 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-log-httpd\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.845965 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-config\") pod \"neutron-db-sync-z5hrj\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.846004 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqdnc\" (UniqueName: \"kubernetes.io/projected/6a53958b-ee42-4dea-af5a-086e825f672e-kube-api-access-bqdnc\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.847213 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-scripts\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.847576 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-run-httpd\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.851627 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-log-httpd\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.854554 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57679b99fc-55gj9"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.857109 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-combined-ca-bundle\") pod \"neutron-db-sync-z5hrj\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " 
pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.861612 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-config-data\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.861920 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a53958b-ee42-4dea-af5a-086e825f672e-logs\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.907205 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgzh\" (UniqueName: \"kubernetes.io/projected/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-kube-api-access-dsgzh\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.907707 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-scripts\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.910465 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqdnc\" (UniqueName: \"kubernetes.io/projected/6a53958b-ee42-4dea-af5a-086e825f672e-kube-api-access-bqdnc\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.918763 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.926929 4758 util.go:30] "No sandbox for pod can be found. 
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.922572 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.923410 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6a53958b-ee42-4dea-af5a-086e825f672e-horizon-secret-key\") pod \"horizon-57679b99fc-55gj9\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " pod="openstack/horizon-57679b99fc-55gj9"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.923641 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-config-data\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.930871 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2rft\" (UniqueName: \"kubernetes.io/projected/b166e095-ba6b-443f-8c0a-0e83bb698ccd-kube-api-access-v2rft\") pod \"neutron-db-sync-z5hrj\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " pod="openstack/neutron-db-sync-z5hrj"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.931331 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.921765 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-config\") pod \"neutron-db-sync-z5hrj\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " pod="openstack/neutron-db-sync-z5hrj"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.952641 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.953394 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.953705 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wpw9v"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.953784 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 30 08:49:16 crc kubenswrapper[4758]: I0130 08:49:16.964915 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " pod="openstack/ceilometer-0"
Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.014860 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57679b99fc-55gj9"
Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.045440 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.046946 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.050782 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.051105 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.064827 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-z5hrj"
Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.066829 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.066887 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.066934 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.066974 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqxlb\" (UniqueName: \"kubernetes.io/projected/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-kube-api-access-gqxlb\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.067064 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.067112 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.067144 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.067175 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-logs\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-logs\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.112726 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.185959 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.186180 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.186655 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.186736 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.186795 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-logs\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.186835 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.186860 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.186936 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.186955 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdsnc\" (UniqueName: \"kubernetes.io/projected/05ff1946-6081-48fc-9474-e434068abc50-kube-api-access-mdsnc\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.187050 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.187082 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.187149 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.187210 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqxlb\" (UniqueName: \"kubernetes.io/projected/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-kube-api-access-gqxlb\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.187260 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.187363 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-logs\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.187966 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.194974 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") device 
mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.208310 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-logs\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.210564 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.220204 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.268188 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.270161 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.271630 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.333210 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.333345 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-logs\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.333454 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.333480 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.333558 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.333584 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.333654 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.333672 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdsnc\" (UniqueName: \"kubernetes.io/projected/05ff1946-6081-48fc-9474-e434068abc50-kube-api-access-mdsnc\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.340775 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: 
I0130 08:49:17.341409 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqxlb\" (UniqueName: \"kubernetes.io/projected/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-kube-api-access-gqxlb\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.342726 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.343007 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-logs\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.349118 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.361168 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.364254 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.368364 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.369014 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.371609 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.378910 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdsnc\" (UniqueName: 
\"kubernetes.io/projected/05ff1946-6081-48fc-9474-e434068abc50-kube-api-access-mdsnc\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.437311 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" podUID="d114ad35-e2cf-4ff7-8dfc-d747a159de5d" containerName="dnsmasq-dns" containerID="cri-o://0907523829ec21a592ed73a39e4715a74dfc754e114890f00161aa67a23fe215" gracePeriod=10 Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.490767 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-dxjh5"] Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.499222 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.509000 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.526017 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8tkhn"] Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.549092 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.746967 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5755df7977-7khvs"] Jan 30 08:49:17 crc kubenswrapper[4758]: W0130 08:49:17.830555 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57c2a333_014d_4c26_b459_fd88537d21ad.slice/crio-5f75c2b63d8a8e525916006c6cde4c59530db436c380e604f17c1c8a8b4e28b8 WatchSource:0}: Error finding container 5f75c2b63d8a8e525916006c6cde4c59530db436c380e604f17c1c8a8b4e28b8: Status 404 returned error can't find the container with id 5f75c2b63d8a8e525916006c6cde4c59530db436c380e604f17c1c8a8b4e28b8 Jan 30 08:49:17 crc kubenswrapper[4758]: I0130 08:49:17.984977 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:49:17 crc kubenswrapper[4758]: E0130 08:49:17.985320 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:49:17 crc kubenswrapper[4758]: E0130 08:49:17.985337 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:49:17 crc kubenswrapper[4758]: E0130 08:49:17.985381 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:50:21.985364739 +0000 UTC m=+1226.957676291 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.064778 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lqbmm"] Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.198637 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-7trbn"] Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.400741 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-c24lg"] Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.435468 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lqbmm" event={"ID":"11f7c236-867a-465b-9514-de6a765b312b","Type":"ContainerStarted","Data":"846fcd3c2e04f783d1345c62f282c95a03f7f69db8ece0d61ccbe690d3fc4153"} Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.449065 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8tkhn" event={"ID":"f98ba341-0349-4a6f-ae1d-49f5a794d9c9","Type":"ContainerStarted","Data":"bfb23d59a6ccdaa35ef6bbd15a521f36a60dd5c317e87318f0ba3a0e70190244"} Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.450105 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" event={"ID":"70857a89-e946-4f1d-b19b-fbbd9445de0f","Type":"ContainerStarted","Data":"6e06838e5e1b2a6a9c24419729a2a4691e8bf4541d5b3b48a3967469fe7008e7"} Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.465825 4758 generic.go:334] "Generic (PLEG): container finished" podID="d114ad35-e2cf-4ff7-8dfc-d747a159de5d" containerID="0907523829ec21a592ed73a39e4715a74dfc754e114890f00161aa67a23fe215" exitCode=0 Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.465899 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" event={"ID":"d114ad35-e2cf-4ff7-8dfc-d747a159de5d","Type":"ContainerDied","Data":"0907523829ec21a592ed73a39e4715a74dfc754e114890f00161aa67a23fe215"} Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.467843 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-7trbn" event={"ID":"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06","Type":"ContainerStarted","Data":"21f1c5561b1c8dc8979b30b5fd4e9ebe5b905ba1454066061809e405ff87c8bc"} Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.469484 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5755df7977-7khvs" event={"ID":"57c2a333-014d-4c26-b459-fd88537d21ad","Type":"ContainerStarted","Data":"5f75c2b63d8a8e525916006c6cde4c59530db436c380e604f17c1c8a8b4e28b8"} Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.616807 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57679b99fc-55gj9"] Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.711287 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-x2v6d"] Jan 30 08:49:18 crc kubenswrapper[4758]: I0130 08:49:18.828955 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-z5hrj"] Jan 30 08:49:19 crc kubenswrapper[4758]: I0130 08:49:19.010271 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:49:19 crc kubenswrapper[4758]: I0130 08:49:19.096563 4758 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:49:19 crc kubenswrapper[4758]: I0130 08:49:19.995432 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.196246 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.297499 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5755df7977-7khvs"] Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.315796 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-596b6b9c4f-tqm7h"] Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.320431 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.357778 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jltl6\" (UniqueName: \"kubernetes.io/projected/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-kube-api-access-jltl6\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.357826 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-config-data\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.357877 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-logs\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.357950 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-horizon-secret-key\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.357976 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-scripts\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.376170 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-596b6b9c4f-tqm7h"] Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.459138 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jltl6\" (UniqueName: \"kubernetes.io/projected/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-kube-api-access-jltl6\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.459197 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-config-data\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.459253 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-logs\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.459341 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-horizon-secret-key\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.459376 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-scripts\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.460357 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-scripts\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.461646 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-config-data\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.461898 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-logs\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.475798 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.490430 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.490735 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-horizon-secret-key\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.501925 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jltl6\" (UniqueName: \"kubernetes.io/projected/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-kube-api-access-jltl6\") pod \"horizon-596b6b9c4f-tqm7h\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " pod="openstack/horizon-596b6b9c4f-tqm7h" 
Jan 30 08:49:20 crc kubenswrapper[4758]: I0130 08:49:20.646006 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-596b6b9c4f-tqm7h"
Jan 30 08:49:21 crc kubenswrapper[4758]: W0130 08:49:21.241527 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12ff21aa_edae_4f56_a2ea_be0deb2d84d7.slice/crio-56863fe421871dd9708f2fb55d75c349707fee747f1bbd765e9d58a33bee80d2 WatchSource:0}: Error finding container 56863fe421871dd9708f2fb55d75c349707fee747f1bbd765e9d58a33bee80d2: Status 404 returned error can't find the container with id 56863fe421871dd9708f2fb55d75c349707fee747f1bbd765e9d58a33bee80d2
Jan 30 08:49:21 crc kubenswrapper[4758]: W0130 08:49:21.245518 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a53958b_ee42_4dea_af5a_086e825f672e.slice/crio-8b214db64125952356223045bfea0cadf16d3ed9ce368671c5c068061d4b7fe9 WatchSource:0}: Error finding container 8b214db64125952356223045bfea0cadf16d3ed9ce368671c5c068061d4b7fe9: Status 404 returned error can't find the container with id 8b214db64125952356223045bfea0cadf16d3ed9ce368671c5c068061d4b7fe9
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.407329 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7"
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.489410 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-config\") pod \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") "
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.489511 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-nb\") pod \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") "
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.489623 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-dns-svc\") pod \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") "
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.489740 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stwf7\" (UniqueName: \"kubernetes.io/projected/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-kube-api-access-stwf7\") pod \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") "
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.489829 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-sb\") pod \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") "
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.534776 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z5hrj" event={"ID":"b166e095-ba6b-443f-8c0a-0e83bb698ccd","Type":"ContainerStarted","Data":"f883e7737939e0c54e462c2bd84b323261d5728c320e964b9fa1343f84ca779a"}
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.541765 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-kube-api-access-stwf7" (OuterVolumeSpecName: "kube-api-access-stwf7") pod "d114ad35-e2cf-4ff7-8dfc-d747a159de5d" (UID: "d114ad35-e2cf-4ff7-8dfc-d747a159de5d"). InnerVolumeSpecName "kube-api-access-stwf7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.547128 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57679b99fc-55gj9" event={"ID":"6a53958b-ee42-4dea-af5a-086e825f672e","Type":"ContainerStarted","Data":"8b214db64125952356223045bfea0cadf16d3ed9ce368671c5c068061d4b7fe9"}
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.552935 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x2v6d" event={"ID":"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3","Type":"ContainerStarted","Data":"3de4a4bfe0a1b2d053131f232fdec07f1fa0eaa08093b340deb9c2fbfcbc2d4c"}
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.559712 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85d0c75c-159b-45a5-9d5d-a9030f2d06a6","Type":"ContainerStarted","Data":"e0a1dba81e1c51abf7fa44611674e5a569a817032b5b33cdc56d53bd0efa011b"}
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.565792 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05ff1946-6081-48fc-9474-e434068abc50","Type":"ContainerStarted","Data":"3f904bbcb38b17c79b221e571054b48c5a966635c0e2f3068714106112c9f841"}
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.567219 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerStarted","Data":"c0607f79147011ac217f4da35286aea41ac3abd388dd413c6012b3dbc3df6b9a"}
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.593709 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-config" (OuterVolumeSpecName: "config") pod "d114ad35-e2cf-4ff7-8dfc-d747a159de5d" (UID: "d114ad35-e2cf-4ff7-8dfc-d747a159de5d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.593940 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-config\") pod \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\" (UID: \"d114ad35-e2cf-4ff7-8dfc-d747a159de5d\") "
Jan 30 08:49:21 crc kubenswrapper[4758]: W0130 08:49:21.594235 4758 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d114ad35-e2cf-4ff7-8dfc-d747a159de5d/volumes/kubernetes.io~configmap/config
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.594249 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-config" (OuterVolumeSpecName: "config") pod "d114ad35-e2cf-4ff7-8dfc-d747a159de5d" (UID: "d114ad35-e2cf-4ff7-8dfc-d747a159de5d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.594389 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7"
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.594551 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.594575 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stwf7\" (UniqueName: \"kubernetes.io/projected/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-kube-api-access-stwf7\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.594601 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-fj5s7" event={"ID":"d114ad35-e2cf-4ff7-8dfc-d747a159de5d","Type":"ContainerDied","Data":"a90ebfb23c1318d1d27a1d94512248bbaa3e37a016b8932a8f0ed5677213f22c"}
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.594648 4758 scope.go:117] "RemoveContainer" containerID="0907523829ec21a592ed73a39e4715a74dfc754e114890f00161aa67a23fe215"
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.596986 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d114ad35-e2cf-4ff7-8dfc-d747a159de5d" (UID: "d114ad35-e2cf-4ff7-8dfc-d747a159de5d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.598399 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c24lg" event={"ID":"12ff21aa-edae-4f56-a2ea-be0deb2d84d7","Type":"ContainerStarted","Data":"56863fe421871dd9708f2fb55d75c349707fee747f1bbd765e9d58a33bee80d2"}
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.611056 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d114ad35-e2cf-4ff7-8dfc-d747a159de5d" (UID: "d114ad35-e2cf-4ff7-8dfc-d747a159de5d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.641183 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d114ad35-e2cf-4ff7-8dfc-d747a159de5d" (UID: "d114ad35-e2cf-4ff7-8dfc-d747a159de5d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.698689 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.698715 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.698723 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d114ad35-e2cf-4ff7-8dfc-d747a159de5d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.725622 4758 scope.go:117] "RemoveContainer" containerID="b9fc2b9b944e1e445bbffd968c0e5c3fd8f7cf9a43829966a1d32764c98c77b9"
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.934936 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-fj5s7"]
Jan 30 08:49:21 crc kubenswrapper[4758]: I0130 08:49:21.947897 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-fj5s7"]
Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.124704 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-596b6b9c4f-tqm7h"]
Jan 30 08:49:22 crc kubenswrapper[4758]: E0130 08:49:22.130503 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70857a89_e946_4f1d_b19b_fbbd9445de0f.slice/crio-a097fef8a7ede1eeca098d525fa34f182a088c8dafe767edbe11e6ba96d80393.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f95b5bb_621a_4c74_bc39_a5aea5ef4a06.slice/crio-b3ea3c454168822443d7f32c74a100f615de44f286974f28516fa17d776a4519.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 08:49:22 crc kubenswrapper[4758]: W0130 08:49:22.173435 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1a59ac3_5eae_4e76_a7b0_5e3d4395515c.slice/crio-4c28043680fea053f92ce9f17f24055029f33da223c2e4879e135172e9b327fa WatchSource:0}: Error finding container 4c28043680fea053f92ce9f17f24055029f33da223c2e4879e135172e9b327fa: Status 404 returned error can't find the container with id 4c28043680fea053f92ce9f17f24055029f33da223c2e4879e135172e9b327fa
Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.635168 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z5hrj" event={"ID":"b166e095-ba6b-443f-8c0a-0e83bb698ccd","Type":"ContainerStarted","Data":"a887d904bf88f7531d24fd0632b3980599f13f39af4a57d591d1cab59676a5bb"}
Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.639773 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-596b6b9c4f-tqm7h" event={"ID":"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c","Type":"ContainerStarted","Data":"4c28043680fea053f92ce9f17f24055029f33da223c2e4879e135172e9b327fa"}
Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.643395 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8tkhn" event={"ID":"f98ba341-0349-4a6f-ae1d-49f5a794d9c9","Type":"ContainerStarted","Data":"2c3a333cae2d6b2084a8ea4de5cb47b1fe487040458de9660811e92c533cb616"}
Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.656335 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-z5hrj" podStartSLOduration=6.65631464 podStartE2EDuration="6.65631464s" podCreationTimestamp="2026-01-30 08:49:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:22.655568806 +0000 UTC m=+1167.627880357" watchObservedRunningTime="2026-01-30 08:49:22.65631464 +0000 UTC m=+1167.628626191"
Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.662142 4758 generic.go:334] "Generic (PLEG): container finished" podID="70857a89-e946-4f1d-b19b-fbbd9445de0f" containerID="a097fef8a7ede1eeca098d525fa34f182a088c8dafe767edbe11e6ba96d80393" exitCode=0
Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.662356 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" event={"ID":"70857a89-e946-4f1d-b19b-fbbd9445de0f","Type":"ContainerDied","Data":"a097fef8a7ede1eeca098d525fa34f182a088c8dafe767edbe11e6ba96d80393"}
Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.672675 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85d0c75c-159b-45a5-9d5d-a9030f2d06a6","Type":"ContainerStarted","Data":"3b85b634aa2d9351742fa8b9b33587be915cb87e3c8564f41b0c1fb679584f2c"}
Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.682675 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8tkhn" podStartSLOduration=7.682661167 podStartE2EDuration="7.682661167s" podCreationTimestamp="2026-01-30 08:49:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:22.682382888 +0000 UTC m=+1167.654694449" watchObservedRunningTime="2026-01-30 08:49:22.682661167 +0000 UTC m=+1167.654972718"
Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.686772 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05ff1946-6081-48fc-9474-e434068abc50","Type":"ContainerStarted","Data":"b45fc4ecb4679d6da4f280f347e3f210ef87260ed8eae98565dd8a877be3d864"}
Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.692924 4758 generic.go:334] "Generic (PLEG): container finished" podID="0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" containerID="b3ea3c454168822443d7f32c74a100f615de44f286974f28516fa17d776a4519" exitCode=0
Jan 30 08:49:22 crc kubenswrapper[4758]: I0130 08:49:22.693009 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-7trbn" event={"ID":"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06","Type":"ContainerDied","Data":"b3ea3c454168822443d7f32c74a100f615de44f286974f28516fa17d776a4519"}
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.250325 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5679f497-dxjh5"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.350633 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-config\") pod \"70857a89-e946-4f1d-b19b-fbbd9445de0f\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") "
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.350689 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-sb\") pod \"70857a89-e946-4f1d-b19b-fbbd9445de0f\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") "
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.350750 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j595n\" (UniqueName: \"kubernetes.io/projected/70857a89-e946-4f1d-b19b-fbbd9445de0f-kube-api-access-j595n\") pod \"70857a89-e946-4f1d-b19b-fbbd9445de0f\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") "
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.350793 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-dns-svc\") pod \"70857a89-e946-4f1d-b19b-fbbd9445de0f\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") "
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.350867 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-nb\") pod \"70857a89-e946-4f1d-b19b-fbbd9445de0f\" (UID: \"70857a89-e946-4f1d-b19b-fbbd9445de0f\") "
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.356113 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70857a89-e946-4f1d-b19b-fbbd9445de0f-kube-api-access-j595n" (OuterVolumeSpecName: "kube-api-access-j595n") pod "70857a89-e946-4f1d-b19b-fbbd9445de0f" (UID: "70857a89-e946-4f1d-b19b-fbbd9445de0f"). InnerVolumeSpecName "kube-api-access-j595n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.411018 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "70857a89-e946-4f1d-b19b-fbbd9445de0f" (UID: "70857a89-e946-4f1d-b19b-fbbd9445de0f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.415763 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "70857a89-e946-4f1d-b19b-fbbd9445de0f" (UID: "70857a89-e946-4f1d-b19b-fbbd9445de0f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.417361 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70857a89-e946-4f1d-b19b-fbbd9445de0f" (UID: "70857a89-e946-4f1d-b19b-fbbd9445de0f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.451199 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-config" (OuterVolumeSpecName: "config") pod "70857a89-e946-4f1d-b19b-fbbd9445de0f" (UID: "70857a89-e946-4f1d-b19b-fbbd9445de0f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.452537 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-config\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.452826 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.452836 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j595n\" (UniqueName: \"kubernetes.io/projected/70857a89-e946-4f1d-b19b-fbbd9445de0f-kube-api-access-j595n\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.452845 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.452853 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70857a89-e946-4f1d-b19b-fbbd9445de0f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.726508 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-7trbn" event={"ID":"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06","Type":"ContainerStarted","Data":"f78641dc0545684ea84c19e7528fb1e66fc58f4b4489db98333bbe75698bd8be"}
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.726661 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56798b757f-7trbn"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.730096 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5679f497-dxjh5" event={"ID":"70857a89-e946-4f1d-b19b-fbbd9445de0f","Type":"ContainerDied","Data":"6e06838e5e1b2a6a9c24419729a2a4691e8bf4541d5b3b48a3967469fe7008e7"}
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.730133 4758 scope.go:117] "RemoveContainer" containerID="a097fef8a7ede1eeca098d525fa34f182a088c8dafe767edbe11e6ba96d80393"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.730254 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5679f497-dxjh5"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.736347 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05ff1946-6081-48fc-9474-e434068abc50","Type":"ContainerStarted","Data":"7383091d38cac724e66398a4b68f112aeb1b874ffd514f067ea5e896a6467eab"}
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.748004 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56798b757f-7trbn" podStartSLOduration=7.747984983 podStartE2EDuration="7.747984983s" podCreationTimestamp="2026-01-30 08:49:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:23.745008538 +0000 UTC m=+1168.717320099" watchObservedRunningTime="2026-01-30 08:49:23.747984983 +0000 UTC m=+1168.720296534"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.798819 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d114ad35-e2cf-4ff7-8dfc-d747a159de5d" path="/var/lib/kubelet/pods/d114ad35-e2cf-4ff7-8dfc-d747a159de5d/volumes"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.814284 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-dxjh5"]
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:23.826166 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d5679f497-dxjh5"]
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:24.760149 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85d0c75c-159b-45a5-9d5d-a9030f2d06a6","Type":"ContainerStarted","Data":"0024bb61939ae591ee9587be4bfcbfa247433972e7c710fd8e8ca1c870431083"}
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:24.760566 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="05ff1946-6081-48fc-9474-e434068abc50" containerName="glance-log" containerID="cri-o://b45fc4ecb4679d6da4f280f347e3f210ef87260ed8eae98565dd8a877be3d864" gracePeriod=30
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:24.760636 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="05ff1946-6081-48fc-9474-e434068abc50" containerName="glance-httpd" containerID="cri-o://7383091d38cac724e66398a4b68f112aeb1b874ffd514f067ea5e896a6467eab" gracePeriod=30
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:24.785664 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.785642748 podStartE2EDuration="8.785642748s" podCreationTimestamp="2026-01-30 08:49:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:24.782381395 +0000 UTC m=+1169.754692956" watchObservedRunningTime="2026-01-30 08:49:24.785642748 +0000 UTC m=+1169.757954309"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.482433 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57679b99fc-55gj9"]
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.533809 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-76fc974bd8-4mnvj"]
Jan 30 08:49:26 crc kubenswrapper[4758]: E0130 08:49:25.538794 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d114ad35-e2cf-4ff7-8dfc-d747a159de5d" containerName="dnsmasq-dns"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.538824 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d114ad35-e2cf-4ff7-8dfc-d747a159de5d" containerName="dnsmasq-dns"
Jan 30 08:49:26 crc kubenswrapper[4758]: E0130 08:49:25.538860 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d114ad35-e2cf-4ff7-8dfc-d747a159de5d" containerName="init"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.538868 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d114ad35-e2cf-4ff7-8dfc-d747a159de5d" containerName="init"
Jan 30 08:49:26 crc kubenswrapper[4758]: E0130 08:49:25.538889 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70857a89-e946-4f1d-b19b-fbbd9445de0f" containerName="init"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.538897 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="70857a89-e946-4f1d-b19b-fbbd9445de0f" containerName="init"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.542235 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="d114ad35-e2cf-4ff7-8dfc-d747a159de5d" containerName="dnsmasq-dns"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.542275 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="70857a89-e946-4f1d-b19b-fbbd9445de0f" containerName="init"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.543680 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76fc974bd8-4mnvj"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.548647 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.554586 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76fc974bd8-4mnvj"]
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.609215 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-config-data\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.609349 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-secret-key\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.609406 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-tls-certs\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.609482 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365b123c-aa7f-464d-b659-78154f86d42f-logs\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.609530 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm897\" (UniqueName: \"kubernetes.io/projected/365b123c-aa7f-464d-b659-78154f86d42f-kube-api-access-wm897\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.609558 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-combined-ca-bundle\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.609632 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-scripts\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.647820 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-596b6b9c4f-tqm7h"]
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.682768 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5cf698bb7b-gp87v"]
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.684558 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cf698bb7b-gp87v"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.708986 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cf698bb7b-gp87v"]
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.710887 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm897\" (UniqueName: \"kubernetes.io/projected/365b123c-aa7f-464d-b659-78154f86d42f-kube-api-access-wm897\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.710953 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-combined-ca-bundle\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.710987 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97906db2-3b2d-44ec-af77-d3edf75b7f76-scripts\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711019 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-scripts\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj"
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711054 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\"
(UniqueName: \"kubernetes.io/configmap/97906db2-3b2d-44ec-af77-d3edf75b7f76-config-data\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711073 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-config-data\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711119 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97906db2-3b2d-44ec-af77-d3edf75b7f76-horizon-secret-key\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711158 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-secret-key\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711177 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/97906db2-3b2d-44ec-af77-d3edf75b7f76-horizon-tls-certs\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711194 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97906db2-3b2d-44ec-af77-d3edf75b7f76-logs\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711218 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8m5m\" (UniqueName: \"kubernetes.io/projected/97906db2-3b2d-44ec-af77-d3edf75b7f76-kube-api-access-x8m5m\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711249 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-tls-certs\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711282 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97906db2-3b2d-44ec-af77-d3edf75b7f76-combined-ca-bundle\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.711318 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/365b123c-aa7f-464d-b659-78154f86d42f-logs\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.713945 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-scripts\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.714354 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365b123c-aa7f-464d-b659-78154f86d42f-logs\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.715003 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-config-data\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.718729 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-combined-ca-bundle\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.721560 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-tls-certs\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.730868 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-secret-key\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.752865 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm897\" (UniqueName: \"kubernetes.io/projected/365b123c-aa7f-464d-b659-78154f86d42f-kube-api-access-wm897\") pod \"horizon-76fc974bd8-4mnvj\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") " pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.813276 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70857a89-e946-4f1d-b19b-fbbd9445de0f" path="/var/lib/kubelet/pods/70857a89-e946-4f1d-b19b-fbbd9445de0f/volumes" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.813446 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97906db2-3b2d-44ec-af77-d3edf75b7f76-scripts\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.813503 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/97906db2-3b2d-44ec-af77-d3edf75b7f76-config-data\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.813576 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97906db2-3b2d-44ec-af77-d3edf75b7f76-horizon-secret-key\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.813624 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/97906db2-3b2d-44ec-af77-d3edf75b7f76-horizon-tls-certs\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.813647 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97906db2-3b2d-44ec-af77-d3edf75b7f76-logs\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.813684 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8m5m\" (UniqueName: \"kubernetes.io/projected/97906db2-3b2d-44ec-af77-d3edf75b7f76-kube-api-access-x8m5m\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.813723 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97906db2-3b2d-44ec-af77-d3edf75b7f76-combined-ca-bundle\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.816630 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97906db2-3b2d-44ec-af77-d3edf75b7f76-scripts\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.816905 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97906db2-3b2d-44ec-af77-d3edf75b7f76-logs\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.821129 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97906db2-3b2d-44ec-af77-d3edf75b7f76-horizon-secret-key\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.827211 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97906db2-3b2d-44ec-af77-d3edf75b7f76-config-data\") pod \"horizon-5cf698bb7b-gp87v\" (UID: 
\"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.834872 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/97906db2-3b2d-44ec-af77-d3edf75b7f76-horizon-tls-certs\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.848955 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97906db2-3b2d-44ec-af77-d3edf75b7f76-combined-ca-bundle\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.851518 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05ff1946-6081-48fc-9474-e434068abc50","Type":"ContainerDied","Data":"7383091d38cac724e66398a4b68f112aeb1b874ffd514f067ea5e896a6467eab"} Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.858914 4758 generic.go:334] "Generic (PLEG): container finished" podID="05ff1946-6081-48fc-9474-e434068abc50" containerID="7383091d38cac724e66398a4b68f112aeb1b874ffd514f067ea5e896a6467eab" exitCode=0 Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.858973 4758 generic.go:334] "Generic (PLEG): container finished" podID="05ff1946-6081-48fc-9474-e434068abc50" containerID="b45fc4ecb4679d6da4f280f347e3f210ef87260ed8eae98565dd8a877be3d864" exitCode=143 Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.859201 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerName="glance-log" containerID="cri-o://3b85b634aa2d9351742fa8b9b33587be915cb87e3c8564f41b0c1fb679584f2c" gracePeriod=30 Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.859345 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05ff1946-6081-48fc-9474-e434068abc50","Type":"ContainerDied","Data":"b45fc4ecb4679d6da4f280f347e3f210ef87260ed8eae98565dd8a877be3d864"} Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.859421 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerName="glance-httpd" containerID="cri-o://0024bb61939ae591ee9587be4bfcbfa247433972e7c710fd8e8ca1c870431083" gracePeriod=30 Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:25.874961 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8m5m\" (UniqueName: \"kubernetes.io/projected/97906db2-3b2d-44ec-af77-d3edf75b7f76-kube-api-access-x8m5m\") pod \"horizon-5cf698bb7b-gp87v\" (UID: \"97906db2-3b2d-44ec-af77-d3edf75b7f76\") " pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.008677 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.042781 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.876173 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.913054 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.913011555 podStartE2EDuration="10.913011555s" podCreationTimestamp="2026-01-30 08:49:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:26.34080098 +0000 UTC m=+1171.313112551" watchObservedRunningTime="2026-01-30 08:49:26.913011555 +0000 UTC m=+1171.885323106" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.917772 4758 generic.go:334] "Generic (PLEG): container finished" podID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerID="3b85b634aa2d9351742fa8b9b33587be915cb87e3c8564f41b0c1fb679584f2c" exitCode=143 Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.917903 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85d0c75c-159b-45a5-9d5d-a9030f2d06a6","Type":"ContainerDied","Data":"3b85b634aa2d9351742fa8b9b33587be915cb87e3c8564f41b0c1fb679584f2c"} Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.938829 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05ff1946-6081-48fc-9474-e434068abc50","Type":"ContainerDied","Data":"3f904bbcb38b17c79b221e571054b48c5a966635c0e2f3068714106112c9f841"} Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.938906 4758 scope.go:117] "RemoveContainer" containerID="7383091d38cac724e66398a4b68f112aeb1b874ffd514f067ea5e896a6467eab" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.939403 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.963726 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-config-data\") pod \"05ff1946-6081-48fc-9474-e434068abc50\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.963790 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-httpd-run\") pod \"05ff1946-6081-48fc-9474-e434068abc50\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.963948 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-combined-ca-bundle\") pod \"05ff1946-6081-48fc-9474-e434068abc50\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.964022 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"05ff1946-6081-48fc-9474-e434068abc50\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.964132 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-internal-tls-certs\") pod \"05ff1946-6081-48fc-9474-e434068abc50\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.964198 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-scripts\") pod \"05ff1946-6081-48fc-9474-e434068abc50\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.964233 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdsnc\" (UniqueName: \"kubernetes.io/projected/05ff1946-6081-48fc-9474-e434068abc50-kube-api-access-mdsnc\") pod \"05ff1946-6081-48fc-9474-e434068abc50\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.964281 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-logs\") pod \"05ff1946-6081-48fc-9474-e434068abc50\" (UID: \"05ff1946-6081-48fc-9474-e434068abc50\") " Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.967493 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "05ff1946-6081-48fc-9474-e434068abc50" (UID: "05ff1946-6081-48fc-9474-e434068abc50"). InnerVolumeSpecName "httpd-run". 
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.968499 4758 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.974625 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-logs" (OuterVolumeSpecName: "logs") pod "05ff1946-6081-48fc-9474-e434068abc50" (UID: "05ff1946-6081-48fc-9474-e434068abc50"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:49:26 crc kubenswrapper[4758]: I0130 08:49:26.994268 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05ff1946-6081-48fc-9474-e434068abc50-kube-api-access-mdsnc" (OuterVolumeSpecName: "kube-api-access-mdsnc") pod "05ff1946-6081-48fc-9474-e434068abc50" (UID: "05ff1946-6081-48fc-9474-e434068abc50"). InnerVolumeSpecName "kube-api-access-mdsnc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.014566 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-scripts" (OuterVolumeSpecName: "scripts") pod "05ff1946-6081-48fc-9474-e434068abc50" (UID: "05ff1946-6081-48fc-9474-e434068abc50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.015524 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "05ff1946-6081-48fc-9474-e434068abc50" (UID: "05ff1946-6081-48fc-9474-e434068abc50"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.102894 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.102933 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdsnc\" (UniqueName: \"kubernetes.io/projected/05ff1946-6081-48fc-9474-e434068abc50-kube-api-access-mdsnc\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.102947 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05ff1946-6081-48fc-9474-e434068abc50-logs\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.102990 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.121230 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-config-data" (OuterVolumeSpecName: "config-data") pod "05ff1946-6081-48fc-9474-e434068abc50" (UID: "05ff1946-6081-48fc-9474-e434068abc50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.132066 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cf698bb7b-gp87v"]
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.140064 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Jan 30 08:49:27 crc kubenswrapper[4758]: W0130 08:49:27.142227 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97906db2_3b2d_44ec_af77_d3edf75b7f76.slice/crio-7ccf47e82c9ad8ec0bee063d5c40362df8e0161e6ee3b78270b8e187c419763b WatchSource:0}: Error finding container 7ccf47e82c9ad8ec0bee063d5c40362df8e0161e6ee3b78270b8e187c419763b: Status 404 returned error can't find the container with id 7ccf47e82c9ad8ec0bee063d5c40362df8e0161e6ee3b78270b8e187c419763b
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.172448 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05ff1946-6081-48fc-9474-e434068abc50" (UID: "05ff1946-6081-48fc-9474-e434068abc50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.205253 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.205278 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.205289 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.208743 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "05ff1946-6081-48fc-9474-e434068abc50" (UID: "05ff1946-6081-48fc-9474-e434068abc50"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.276370 4758 scope.go:117] "RemoveContainer" containerID="b45fc4ecb4679d6da4f280f347e3f210ef87260ed8eae98565dd8a877be3d864"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.291386 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.295939 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.306924 4758 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05ff1946-6081-48fc-9474-e434068abc50-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.332721 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 08:49:27 crc kubenswrapper[4758]: E0130 08:49:27.333260 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ff1946-6081-48fc-9474-e434068abc50" containerName="glance-httpd"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.333278 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ff1946-6081-48fc-9474-e434068abc50" containerName="glance-httpd"
Jan 30 08:49:27 crc kubenswrapper[4758]: E0130 08:49:27.333312 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ff1946-6081-48fc-9474-e434068abc50" containerName="glance-log"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.333320 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ff1946-6081-48fc-9474-e434068abc50" containerName="glance-log"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.333504 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="05ff1946-6081-48fc-9474-e434068abc50" containerName="glance-log"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.333513 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="05ff1946-6081-48fc-9474-e434068abc50" containerName="glance-httpd"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.334522 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.338675 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.341809 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.373105 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.411468 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.411508 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-logs\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.411561 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.411580 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.411608 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.411649 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.411710 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.411727 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmcr9\" (UniqueName: \"kubernetes.io/projected/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-kube-api-access-mmcr9\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.440120 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76fc974bd8-4mnvj"]
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.517575 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-logs\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.517747 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.517785 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.517839 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.517900 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.518025 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.518204 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-logs\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.518474 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.518596 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.518631 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmcr9\" (UniqueName: \"kubernetes.io/projected/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-kube-api-access-mmcr9\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.518685 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.522387 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.525198 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.533508 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.534986 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.546276 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmcr9\" (UniqueName: \"kubernetes.io/projected/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-kube-api-access-mmcr9\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.552812 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") " pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.668939 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.785456 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05ff1946-6081-48fc-9474-e434068abc50" path="/var/lib/kubelet/pods/05ff1946-6081-48fc-9474-e434068abc50/volumes"
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.972693 4758 generic.go:334] "Generic (PLEG): container finished" podID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerID="0024bb61939ae591ee9587be4bfcbfa247433972e7c710fd8e8ca1c870431083" exitCode=0
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.972776 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85d0c75c-159b-45a5-9d5d-a9030f2d06a6","Type":"ContainerDied","Data":"0024bb61939ae591ee9587be4bfcbfa247433972e7c710fd8e8ca1c870431083"}
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.987377 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf698bb7b-gp87v" event={"ID":"97906db2-3b2d-44ec-af77-d3edf75b7f76","Type":"ContainerStarted","Data":"7ccf47e82c9ad8ec0bee063d5c40362df8e0161e6ee3b78270b8e187c419763b"}
Jan 30 08:49:27 crc kubenswrapper[4758]: I0130 08:49:27.991246 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerStarted","Data":"a0f4a512c849aa8b080efc203f40fc92f70f7018d35ed2e40b0a8bd341bb6b8b"}
Jan 30 08:49:30 crc kubenswrapper[4758]: I0130 08:49:30.010886 4758 generic.go:334] "Generic (PLEG): container finished" podID="f98ba341-0349-4a6f-ae1d-49f5a794d9c9" containerID="2c3a333cae2d6b2084a8ea4de5cb47b1fe487040458de9660811e92c533cb616" exitCode=0
Jan 30 08:49:30 crc kubenswrapper[4758]: I0130 08:49:30.011076 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8tkhn" event={"ID":"f98ba341-0349-4a6f-ae1d-49f5a794d9c9","Type":"ContainerDied","Data":"2c3a333cae2d6b2084a8ea4de5cb47b1fe487040458de9660811e92c533cb616"}
Jan 30 08:49:31 crc kubenswrapper[4758]: I0130 08:49:31.763211 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56798b757f-7trbn"
Jan 30 08:49:31 crc kubenswrapper[4758]: I0130 08:49:31.845334 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-98tgp"]
Jan 30 08:49:31 crc kubenswrapper[4758]: I0130 08:49:31.845591 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" podUID="05274cdb-49de-4144-85ca-3d46e1790dab" containerName="dnsmasq-dns" containerID="cri-o://fbe24d5e6be67695cb5ddcaebff818aae946840f68728ace533f14d150ec3201" gracePeriod=10
Jan 30 08:49:32 crc kubenswrapper[4758]: I0130 08:49:32.036874 4758 generic.go:334] "Generic (PLEG): container finished" podID="05274cdb-49de-4144-85ca-3d46e1790dab" containerID="fbe24d5e6be67695cb5ddcaebff818aae946840f68728ace533f14d150ec3201" exitCode=0
Jan 30 08:49:32 crc kubenswrapper[4758]: I0130 08:49:32.036916 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" event={"ID":"05274cdb-49de-4144-85ca-3d46e1790dab","Type":"ContainerDied","Data":"fbe24d5e6be67695cb5ddcaebff818aae946840f68728ace533f14d150ec3201"}
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.127566 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" podUID="05274cdb-49de-4144-85ca-3d46e1790dab" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused"
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.835675 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.841956 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8tkhn"
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.971690 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-credential-keys\") pod \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") "
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.971782 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-logs\") pod \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") "
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.971818 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-fernet-keys\") pod \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") "
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.971869 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-scripts\") pod \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") "
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.971932 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-combined-ca-bundle\") pod \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") "
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.972001 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bgc7\" (UniqueName: \"kubernetes.io/projected/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-kube-api-access-7bgc7\") pod \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") "
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.972030 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") "
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.972089 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-scripts\") pod \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") "
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.972122 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-public-tls-certs\") pod \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") "
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.972153 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqxlb\" (UniqueName: \"kubernetes.io/projected/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-kube-api-access-gqxlb\") pod \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") "
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.972206 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-config-data\") pod \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") "
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.972239 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-combined-ca-bundle\") pod \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") "
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.972279 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-config-data\") pod \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\" (UID: \"f98ba341-0349-4a6f-ae1d-49f5a794d9c9\") "
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.972311 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-httpd-run\") pod \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\" (UID: \"85d0c75c-159b-45a5-9d5d-a9030f2d06a6\") "
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.973243 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-logs" (OuterVolumeSpecName: "logs") pod "85d0c75c-159b-45a5-9d5d-a9030f2d06a6" (UID: "85d0c75c-159b-45a5-9d5d-a9030f2d06a6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.974462 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "85d0c75c-159b-45a5-9d5d-a9030f2d06a6" (UID: "85d0c75c-159b-45a5-9d5d-a9030f2d06a6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.983935 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f98ba341-0349-4a6f-ae1d-49f5a794d9c9" (UID: "f98ba341-0349-4a6f-ae1d-49f5a794d9c9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.992629 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-kube-api-access-gqxlb" (OuterVolumeSpecName: "kube-api-access-gqxlb") pod "85d0c75c-159b-45a5-9d5d-a9030f2d06a6" (UID: "85d0c75c-159b-45a5-9d5d-a9030f2d06a6"). InnerVolumeSpecName "kube-api-access-gqxlb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.993433 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "85d0c75c-159b-45a5-9d5d-a9030f2d06a6" (UID: "85d0c75c-159b-45a5-9d5d-a9030f2d06a6"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 08:49:33 crc kubenswrapper[4758]: I0130 08:49:33.994395 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f98ba341-0349-4a6f-ae1d-49f5a794d9c9" (UID: "f98ba341-0349-4a6f-ae1d-49f5a794d9c9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.006604 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-scripts" (OuterVolumeSpecName: "scripts") pod "f98ba341-0349-4a6f-ae1d-49f5a794d9c9" (UID: "f98ba341-0349-4a6f-ae1d-49f5a794d9c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.006664 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-kube-api-access-7bgc7" (OuterVolumeSpecName: "kube-api-access-7bgc7") pod "f98ba341-0349-4a6f-ae1d-49f5a794d9c9" (UID: "f98ba341-0349-4a6f-ae1d-49f5a794d9c9"). InnerVolumeSpecName "kube-api-access-7bgc7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.011558 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-scripts" (OuterVolumeSpecName: "scripts") pod "85d0c75c-159b-45a5-9d5d-a9030f2d06a6" (UID: "85d0c75c-159b-45a5-9d5d-a9030f2d06a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.013766 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-config-data" (OuterVolumeSpecName: "config-data") pod "f98ba341-0349-4a6f-ae1d-49f5a794d9c9" (UID: "f98ba341-0349-4a6f-ae1d-49f5a794d9c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.037248 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f98ba341-0349-4a6f-ae1d-49f5a794d9c9" (UID: "f98ba341-0349-4a6f-ae1d-49f5a794d9c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.047347 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85d0c75c-159b-45a5-9d5d-a9030f2d06a6" (UID: "85d0c75c-159b-45a5-9d5d-a9030f2d06a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.069204 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8tkhn" event={"ID":"f98ba341-0349-4a6f-ae1d-49f5a794d9c9","Type":"ContainerDied","Data":"bfb23d59a6ccdaa35ef6bbd15a521f36a60dd5c317e87318f0ba3a0e70190244"}
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.069247 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfb23d59a6ccdaa35ef6bbd15a521f36a60dd5c317e87318f0ba3a0e70190244"
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.069313 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8tkhn"
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.075919 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076076 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076086 4758 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076095 4758 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076105 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-logs\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076114 4758 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076125 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076135 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076144 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bgc7\" (UniqueName: \"kubernetes.io/projected/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-kube-api-access-7bgc7\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076168 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076179 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f98ba341-0349-4a6f-ae1d-49f5a794d9c9-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.076188 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqxlb\" (UniqueName: \"kubernetes.io/projected/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-kube-api-access-gqxlb\") on node \"crc\" DevicePath \"\""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.082995 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85d0c75c-159b-45a5-9d5d-a9030f2d06a6","Type":"ContainerDied","Data":"e0a1dba81e1c51abf7fa44611674e5a569a817032b5b33cdc56d53bd0efa011b"}
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.083304 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.087266 4758 scope.go:117] "RemoveContainer" containerID="0024bb61939ae591ee9587be4bfcbfa247433972e7c710fd8e8ca1c870431083"
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.104869 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.110152 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "85d0c75c-159b-45a5-9d5d-a9030f2d06a6" (UID: "85d0c75c-159b-45a5-9d5d-a9030f2d06a6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.119372 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-config-data" (OuterVolumeSpecName: "config-data") pod "85d0c75c-159b-45a5-9d5d-a9030f2d06a6" (UID: "85d0c75c-159b-45a5-9d5d-a9030f2d06a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.178266 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.178306 4758 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.178316 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d0c75c-159b-45a5-9d5d-a9030f2d06a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.426853 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.446839 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.463101 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:49:34 crc kubenswrapper[4758]: E0130 08:49:34.463555 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98ba341-0349-4a6f-ae1d-49f5a794d9c9" containerName="keystone-bootstrap" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.463572 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98ba341-0349-4a6f-ae1d-49f5a794d9c9" containerName="keystone-bootstrap" Jan 30 08:49:34 crc kubenswrapper[4758]: E0130 08:49:34.463586 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerName="glance-httpd" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.463591 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerName="glance-httpd" Jan 30 08:49:34 crc kubenswrapper[4758]: E0130 08:49:34.463599 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerName="glance-log" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.463606 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerName="glance-log" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.463764 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f98ba341-0349-4a6f-ae1d-49f5a794d9c9" containerName="keystone-bootstrap" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.463783 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerName="glance-httpd" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.463796 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" containerName="glance-log" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.464691 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.472849 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.475119 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.475385 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.590640 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-logs\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.590707 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.590738 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-config-data\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.590901 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.590952 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-scripts\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.591002 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.591071 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mbs4\" (UniqueName: \"kubernetes.io/projected/1ec2d231-4b26-4cc9-a09d-9091153da8a9-kube-api-access-9mbs4\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.591099 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.692856 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-logs\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.693245 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-logs\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.693259 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.693296 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-config-data\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.693393 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.693418 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-scripts\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.693446 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.693482 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mbs4\" (UniqueName: \"kubernetes.io/projected/1ec2d231-4b26-4cc9-a09d-9091153da8a9-kube-api-access-9mbs4\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.693499 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-httpd-run\") 
pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.693788 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.694094 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.700860 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.701449 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.701564 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-config-data\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.703514 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-scripts\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.717926 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mbs4\" (UniqueName: \"kubernetes.io/projected/1ec2d231-4b26-4cc9-a09d-9091153da8a9-kube-api-access-9mbs4\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.721713 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.803684 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:49:34 crc kubenswrapper[4758]: I0130 08:49:34.999664 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8tkhn"] Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.019871 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8tkhn"] Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.054989 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-slg8b"] Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.056169 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.059536 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.059944 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.060106 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w5f5m" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.060264 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.061157 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.066903 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-slg8b"] Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.209413 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-scripts\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.209474 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-fernet-keys\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.209507 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-config-data\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.209562 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-combined-ca-bundle\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.209666 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-credential-keys\") pod 
\"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.209689 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln7jd\" (UniqueName: \"kubernetes.io/projected/419e16c4-297d-490a-8fd3-6d365e20f5f2-kube-api-access-ln7jd\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.311252 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-combined-ca-bundle\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.311335 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-credential-keys\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.311354 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln7jd\" (UniqueName: \"kubernetes.io/projected/419e16c4-297d-490a-8fd3-6d365e20f5f2-kube-api-access-ln7jd\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.311400 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-scripts\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.311438 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-fernet-keys\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.311467 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-config-data\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.321566 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-combined-ca-bundle\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.321888 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-config-data\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 
08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.322333 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-fernet-keys\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.322748 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-scripts\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.322982 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-credential-keys\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.330191 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln7jd\" (UniqueName: \"kubernetes.io/projected/419e16c4-297d-490a-8fd3-6d365e20f5f2-kube-api-access-ln7jd\") pod \"keystone-bootstrap-slg8b\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.390648 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.790603 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85d0c75c-159b-45a5-9d5d-a9030f2d06a6" path="/var/lib/kubelet/pods/85d0c75c-159b-45a5-9d5d-a9030f2d06a6/volumes" Jan 30 08:49:35 crc kubenswrapper[4758]: I0130 08:49:35.792493 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f98ba341-0349-4a6f-ae1d-49f5a794d9c9" path="/var/lib/kubelet/pods/f98ba341-0349-4a6f-ae1d-49f5a794d9c9/volumes" Jan 30 08:49:39 crc kubenswrapper[4758]: E0130 08:49:39.037560 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 08:49:39 crc kubenswrapper[4758]: E0130 08:49:39.038225 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9bh8dhf7h77h579h4h666h5cch58bh5f8h67fh58chddh8h565h8fh644h656hf5h688hfbh667hd6h648h5bhcbh65h5d7h584h69h5cfh5d7q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jltl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-596b6b9c4f-tqm7h_openstack(d1a59ac3-5eae-4e76-a7b0-5e3d4395515c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:49:39 crc kubenswrapper[4758]: E0130 08:49:39.040274 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-596b6b9c4f-tqm7h" podUID="d1a59ac3-5eae-4e76-a7b0-5e3d4395515c" Jan 30 08:49:41 crc kubenswrapper[4758]: E0130 08:49:41.014104 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 08:49:41 crc kubenswrapper[4758]: E0130 08:49:41.014820 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n88hdfh547h5dbh5bdh59h674h5f4hc5h8h64dh75h584h584h4h8ch55bh96hch57hcfh77hc8h697h8fh684hf5h5cdh66dh7dh6h6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fq8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5755df7977-7khvs_openstack(57c2a333-014d-4c26-b459-fd88537d21ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:49:41 crc kubenswrapper[4758]: E0130 08:49:41.016666 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5755df7977-7khvs" podUID="57c2a333-014d-4c26-b459-fd88537d21ad" Jan 30 08:49:41 crc kubenswrapper[4758]: E0130 08:49:41.064183 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 08:49:41 crc kubenswrapper[4758]: E0130 08:49:41.064384 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f7h96h68fh559h8fh5fdh66bh5cdh656h55ch99h66bh567h549h99h599h57ch585h699hc8h5c4hdfhf8h9bh644hd8h567h5c9hf4h699hc7hc5q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bqdnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-57679b99fc-55gj9_openstack(6a53958b-ee42-4dea-af5a-086e825f672e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:49:41 crc kubenswrapper[4758]: E0130 08:49:41.066646 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-57679b99fc-55gj9" podUID="6a53958b-ee42-4dea-af5a-086e825f672e" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.083356 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.159082 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.159301 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl29t\" (UniqueName: \"kubernetes.io/projected/05274cdb-49de-4144-85ca-3d46e1790dab-kube-api-access-xl29t\") pod \"05274cdb-49de-4144-85ca-3d46e1790dab\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.159411 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-dns-svc\") pod \"05274cdb-49de-4144-85ca-3d46e1790dab\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.159522 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" event={"ID":"05274cdb-49de-4144-85ca-3d46e1790dab","Type":"ContainerDied","Data":"f990ab28ecc68fe1f154128716943b17b2550ded5f1eb119ad695ca6b95e4ded"} Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.159593 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-nb\") pod \"05274cdb-49de-4144-85ca-3d46e1790dab\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.159691 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-config\") pod \"05274cdb-49de-4144-85ca-3d46e1790dab\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.159774 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-sb\") pod \"05274cdb-49de-4144-85ca-3d46e1790dab\" (UID: \"05274cdb-49de-4144-85ca-3d46e1790dab\") " Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.195662 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05274cdb-49de-4144-85ca-3d46e1790dab-kube-api-access-xl29t" (OuterVolumeSpecName: "kube-api-access-xl29t") pod "05274cdb-49de-4144-85ca-3d46e1790dab" (UID: "05274cdb-49de-4144-85ca-3d46e1790dab"). InnerVolumeSpecName "kube-api-access-xl29t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.253564 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "05274cdb-49de-4144-85ca-3d46e1790dab" (UID: "05274cdb-49de-4144-85ca-3d46e1790dab"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.272414 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl29t\" (UniqueName: \"kubernetes.io/projected/05274cdb-49de-4144-85ca-3d46e1790dab-kube-api-access-xl29t\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.272450 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.272845 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05274cdb-49de-4144-85ca-3d46e1790dab" (UID: "05274cdb-49de-4144-85ca-3d46e1790dab"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.274337 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "05274cdb-49de-4144-85ca-3d46e1790dab" (UID: "05274cdb-49de-4144-85ca-3d46e1790dab"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.288497 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-config" (OuterVolumeSpecName: "config") pod "05274cdb-49de-4144-85ca-3d46e1790dab" (UID: "05274cdb-49de-4144-85ca-3d46e1790dab"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.374696 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.374735 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.374749 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05274cdb-49de-4144-85ca-3d46e1790dab-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.546259 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-98tgp"] Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.568741 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-98tgp"] Jan 30 08:49:41 crc kubenswrapper[4758]: I0130 08:49:41.782677 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05274cdb-49de-4144-85ca-3d46e1790dab" path="/var/lib/kubelet/pods/05274cdb-49de-4144-85ca-3d46e1790dab/volumes" Jan 30 08:49:43 crc kubenswrapper[4758]: I0130 08:49:43.127109 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-98tgp" podUID="05274cdb-49de-4144-85ca-3d46e1790dab" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: i/o timeout" Jan 30 08:49:45 crc kubenswrapper[4758]: I0130 08:49:45.194447 4758 generic.go:334] "Generic (PLEG): container finished" podID="b166e095-ba6b-443f-8c0a-0e83bb698ccd" containerID="a887d904bf88f7531d24fd0632b3980599f13f39af4a57d591d1cab59676a5bb" exitCode=0 Jan 30 08:49:45 crc kubenswrapper[4758]: I0130 08:49:45.194526 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z5hrj" event={"ID":"b166e095-ba6b-443f-8c0a-0e83bb698ccd","Type":"ContainerDied","Data":"a887d904bf88f7531d24fd0632b3980599f13f39af4a57d591d1cab59676a5bb"} Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.235536 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.248759 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-596b6b9c4f-tqm7h" event={"ID":"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c","Type":"ContainerDied","Data":"4c28043680fea053f92ce9f17f24055029f33da223c2e4879e135172e9b327fa"} Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.248824 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-596b6b9c4f-tqm7h" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.282307 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-scripts\") pod \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.282435 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-config-data\") pod \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.282464 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jltl6\" (UniqueName: \"kubernetes.io/projected/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-kube-api-access-jltl6\") pod \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.282509 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-logs\") pod \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.282632 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-horizon-secret-key\") pod \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\" (UID: \"d1a59ac3-5eae-4e76-a7b0-5e3d4395515c\") " Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.282995 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-logs" (OuterVolumeSpecName: "logs") pod "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c" (UID: "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.282866 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-scripts" (OuterVolumeSpecName: "scripts") pod "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c" (UID: "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.283094 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-config-data" (OuterVolumeSpecName: "config-data") pod "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c" (UID: "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.283458 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.283480 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.283493 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.296454 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c" (UID: "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.296706 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-kube-api-access-jltl6" (OuterVolumeSpecName: "kube-api-access-jltl6") pod "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c" (UID: "d1a59ac3-5eae-4e76-a7b0-5e3d4395515c"). InnerVolumeSpecName "kube-api-access-jltl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.385167 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jltl6\" (UniqueName: \"kubernetes.io/projected/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-kube-api-access-jltl6\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.385200 4758 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.621492 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-596b6b9c4f-tqm7h"] Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.631694 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-596b6b9c4f-tqm7h"] Jan 30 08:49:50 crc kubenswrapper[4758]: E0130 08:49:50.817585 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 30 08:49:50 crc kubenswrapper[4758]: E0130 08:49:50.817776 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v55sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-c24lg_openstack(12ff21aa-edae-4f56-a2ea-be0deb2d84d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:49:50 crc kubenswrapper[4758]: E0130 08:49:50.819809 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-c24lg" podUID="12ff21aa-edae-4f56-a2ea-be0deb2d84d7" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.881383 4758 scope.go:117] "RemoveContainer" containerID="3b85b634aa2d9351742fa8b9b33587be915cb87e3c8564f41b0c1fb679584f2c" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.921575 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.929606 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:50 crc kubenswrapper[4758]: I0130 08:49:50.947703 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001407 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fq8m\" (UniqueName: \"kubernetes.io/projected/57c2a333-014d-4c26-b459-fd88537d21ad-kube-api-access-7fq8m\") pod \"57c2a333-014d-4c26-b459-fd88537d21ad\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001488 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6a53958b-ee42-4dea-af5a-086e825f672e-horizon-secret-key\") pod \"6a53958b-ee42-4dea-af5a-086e825f672e\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001536 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-config-data\") pod \"6a53958b-ee42-4dea-af5a-086e825f672e\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001684 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-scripts\") pod \"57c2a333-014d-4c26-b459-fd88537d21ad\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001738 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-config-data\") pod \"57c2a333-014d-4c26-b459-fd88537d21ad\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001763 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/57c2a333-014d-4c26-b459-fd88537d21ad-horizon-secret-key\") pod \"57c2a333-014d-4c26-b459-fd88537d21ad\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001809 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c2a333-014d-4c26-b459-fd88537d21ad-logs\") pod \"57c2a333-014d-4c26-b459-fd88537d21ad\" (UID: \"57c2a333-014d-4c26-b459-fd88537d21ad\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001845 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqdnc\" (UniqueName: \"kubernetes.io/projected/6a53958b-ee42-4dea-af5a-086e825f672e-kube-api-access-bqdnc\") pod \"6a53958b-ee42-4dea-af5a-086e825f672e\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001886 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-config\") pod \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001914 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2rft\" (UniqueName: \"kubernetes.io/projected/b166e095-ba6b-443f-8c0a-0e83bb698ccd-kube-api-access-v2rft\") pod \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\" (UID: 
\"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001940 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-combined-ca-bundle\") pod \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\" (UID: \"b166e095-ba6b-443f-8c0a-0e83bb698ccd\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.001990 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a53958b-ee42-4dea-af5a-086e825f672e-logs\") pod \"6a53958b-ee42-4dea-af5a-086e825f672e\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.002011 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-scripts\") pod \"6a53958b-ee42-4dea-af5a-086e825f672e\" (UID: \"6a53958b-ee42-4dea-af5a-086e825f672e\") " Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.002168 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-scripts" (OuterVolumeSpecName: "scripts") pod "57c2a333-014d-4c26-b459-fd88537d21ad" (UID: "57c2a333-014d-4c26-b459-fd88537d21ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.002829 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-config-data" (OuterVolumeSpecName: "config-data") pod "6a53958b-ee42-4dea-af5a-086e825f672e" (UID: "6a53958b-ee42-4dea-af5a-086e825f672e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.003307 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.003349 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.003638 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57c2a333-014d-4c26-b459-fd88537d21ad-logs" (OuterVolumeSpecName: "logs") pod "57c2a333-014d-4c26-b459-fd88537d21ad" (UID: "57c2a333-014d-4c26-b459-fd88537d21ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.004252 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57c2a333-014d-4c26-b459-fd88537d21ad-kube-api-access-7fq8m" (OuterVolumeSpecName: "kube-api-access-7fq8m") pod "57c2a333-014d-4c26-b459-fd88537d21ad" (UID: "57c2a333-014d-4c26-b459-fd88537d21ad"). InnerVolumeSpecName "kube-api-access-7fq8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.004859 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a53958b-ee42-4dea-af5a-086e825f672e-logs" (OuterVolumeSpecName: "logs") pod "6a53958b-ee42-4dea-af5a-086e825f672e" (UID: "6a53958b-ee42-4dea-af5a-086e825f672e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.004900 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-config-data" (OuterVolumeSpecName: "config-data") pod "57c2a333-014d-4c26-b459-fd88537d21ad" (UID: "57c2a333-014d-4c26-b459-fd88537d21ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.004967 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a53958b-ee42-4dea-af5a-086e825f672e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6a53958b-ee42-4dea-af5a-086e825f672e" (UID: "6a53958b-ee42-4dea-af5a-086e825f672e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.005475 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-scripts" (OuterVolumeSpecName: "scripts") pod "6a53958b-ee42-4dea-af5a-086e825f672e" (UID: "6a53958b-ee42-4dea-af5a-086e825f672e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.006219 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b166e095-ba6b-443f-8c0a-0e83bb698ccd-kube-api-access-v2rft" (OuterVolumeSpecName: "kube-api-access-v2rft") pod "b166e095-ba6b-443f-8c0a-0e83bb698ccd" (UID: "b166e095-ba6b-443f-8c0a-0e83bb698ccd"). InnerVolumeSpecName "kube-api-access-v2rft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.006266 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a53958b-ee42-4dea-af5a-086e825f672e-kube-api-access-bqdnc" (OuterVolumeSpecName: "kube-api-access-bqdnc") pod "6a53958b-ee42-4dea-af5a-086e825f672e" (UID: "6a53958b-ee42-4dea-af5a-086e825f672e"). InnerVolumeSpecName "kube-api-access-bqdnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.006955 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c2a333-014d-4c26-b459-fd88537d21ad-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "57c2a333-014d-4c26-b459-fd88537d21ad" (UID: "57c2a333-014d-4c26-b459-fd88537d21ad"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.025387 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b166e095-ba6b-443f-8c0a-0e83bb698ccd" (UID: "b166e095-ba6b-443f-8c0a-0e83bb698ccd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.026823 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-config" (OuterVolumeSpecName: "config") pod "b166e095-ba6b-443f-8c0a-0e83bb698ccd" (UID: "b166e095-ba6b-443f-8c0a-0e83bb698ccd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104544 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqdnc\" (UniqueName: \"kubernetes.io/projected/6a53958b-ee42-4dea-af5a-086e825f672e-kube-api-access-bqdnc\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104582 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104592 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2rft\" (UniqueName: \"kubernetes.io/projected/b166e095-ba6b-443f-8c0a-0e83bb698ccd-kube-api-access-v2rft\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104602 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b166e095-ba6b-443f-8c0a-0e83bb698ccd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104610 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a53958b-ee42-4dea-af5a-086e825f672e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104619 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a53958b-ee42-4dea-af5a-086e825f672e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104628 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fq8m\" (UniqueName: \"kubernetes.io/projected/57c2a333-014d-4c26-b459-fd88537d21ad-kube-api-access-7fq8m\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104635 4758 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6a53958b-ee42-4dea-af5a-086e825f672e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104643 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57c2a333-014d-4c26-b459-fd88537d21ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104652 4758 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/57c2a333-014d-4c26-b459-fd88537d21ad-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.104659 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57c2a333-014d-4c26-b459-fd88537d21ad-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.261869 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-z5hrj" 
event={"ID":"b166e095-ba6b-443f-8c0a-0e83bb698ccd","Type":"ContainerDied","Data":"f883e7737939e0c54e462c2bd84b323261d5728c320e964b9fa1343f84ca779a"} Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.262141 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f883e7737939e0c54e462c2bd84b323261d5728c320e964b9fa1343f84ca779a" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.262186 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-z5hrj" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.263740 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57679b99fc-55gj9" event={"ID":"6a53958b-ee42-4dea-af5a-086e825f672e","Type":"ContainerDied","Data":"8b214db64125952356223045bfea0cadf16d3ed9ce368671c5c068061d4b7fe9"} Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.263833 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57679b99fc-55gj9" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.271117 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5755df7977-7khvs" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.271724 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5755df7977-7khvs" event={"ID":"57c2a333-014d-4c26-b459-fd88537d21ad","Type":"ContainerDied","Data":"5f75c2b63d8a8e525916006c6cde4c59530db436c380e604f17c1c8a8b4e28b8"} Jan 30 08:49:51 crc kubenswrapper[4758]: E0130 08:49:51.278338 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-c24lg" podUID="12ff21aa-edae-4f56-a2ea-be0deb2d84d7" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.398025 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57679b99fc-55gj9"] Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.411854 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-57679b99fc-55gj9"] Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.430275 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5755df7977-7khvs"] Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.457804 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5755df7977-7khvs"] Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.778985 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57c2a333-014d-4c26-b459-fd88537d21ad" path="/var/lib/kubelet/pods/57c2a333-014d-4c26-b459-fd88537d21ad/volumes" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.779557 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a53958b-ee42-4dea-af5a-086e825f672e" path="/var/lib/kubelet/pods/6a53958b-ee42-4dea-af5a-086e825f672e/volumes" Jan 30 08:49:51 crc kubenswrapper[4758]: I0130 08:49:51.779989 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1a59ac3-5eae-4e76-a7b0-5e3d4395515c" path="/var/lib/kubelet/pods/d1a59ac3-5eae-4e76-a7b0-5e3d4395515c/volumes" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.247463 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-d5b7q"] Jan 30 08:49:52 crc kubenswrapper[4758]: E0130 08:49:52.248115 4758 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05274cdb-49de-4144-85ca-3d46e1790dab" containerName="init" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.248212 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="05274cdb-49de-4144-85ca-3d46e1790dab" containerName="init" Jan 30 08:49:52 crc kubenswrapper[4758]: E0130 08:49:52.248282 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b166e095-ba6b-443f-8c0a-0e83bb698ccd" containerName="neutron-db-sync" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.248350 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b166e095-ba6b-443f-8c0a-0e83bb698ccd" containerName="neutron-db-sync" Jan 30 08:49:52 crc kubenswrapper[4758]: E0130 08:49:52.248450 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05274cdb-49de-4144-85ca-3d46e1790dab" containerName="dnsmasq-dns" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.248522 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="05274cdb-49de-4144-85ca-3d46e1790dab" containerName="dnsmasq-dns" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.248765 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="05274cdb-49de-4144-85ca-3d46e1790dab" containerName="dnsmasq-dns" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.248837 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b166e095-ba6b-443f-8c0a-0e83bb698ccd" containerName="neutron-db-sync" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.249720 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.271005 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-d5b7q"] Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.338149 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-sb\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.338404 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-nb\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.338508 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drwmm\" (UniqueName: \"kubernetes.io/projected/062c9394-cb5b-4768-b71f-2965c61905b8-kube-api-access-drwmm\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.338614 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-dns-svc\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.338722 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-config\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.428795 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-645778c498-xt8kb"] Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.444639 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.440595 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-sb\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.445309 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-nb\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.445329 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drwmm\" (UniqueName: \"kubernetes.io/projected/062c9394-cb5b-4768-b71f-2965c61905b8-kube-api-access-drwmm\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.445429 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-dns-svc\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.445453 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-config\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.446610 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-config\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.441393 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-sb\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.447798 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-nb\") pod 
\"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.448404 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-dns-svc\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.460725 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-t9t64" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.460964 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.461099 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.461249 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.476768 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-645778c498-xt8kb"] Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.491488 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drwmm\" (UniqueName: \"kubernetes.io/projected/062c9394-cb5b-4768-b71f-2965c61905b8-kube-api-access-drwmm\") pod \"dnsmasq-dns-b6c948c7-d5b7q\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.551167 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-combined-ca-bundle\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.551222 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-config\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.551316 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-httpd-config\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.551335 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-ovndb-tls-certs\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.551401 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdj7z\" (UniqueName: 
\"kubernetes.io/projected/b9e036dd-d20e-4488-8354-7b1079bc8113-kube-api-access-qdj7z\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.576744 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.652636 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdj7z\" (UniqueName: \"kubernetes.io/projected/b9e036dd-d20e-4488-8354-7b1079bc8113-kube-api-access-qdj7z\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.652697 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-combined-ca-bundle\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.652726 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-config\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.652820 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-httpd-config\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.652841 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-ovndb-tls-certs\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.658761 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-ovndb-tls-certs\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.658979 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-httpd-config\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.658883 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-combined-ca-bundle\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.677060 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qdj7z\" (UniqueName: \"kubernetes.io/projected/b9e036dd-d20e-4488-8354-7b1079bc8113-kube-api-access-qdj7z\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.678514 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-config\") pod \"neutron-645778c498-xt8kb\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:52 crc kubenswrapper[4758]: I0130 08:49:52.842731 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.404512 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6bcffb56d9-w524k"] Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.412289 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.415551 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.418651 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.436215 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bcffb56d9-w524k"] Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.501353 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-config\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.501427 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkxtw\" (UniqueName: \"kubernetes.io/projected/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-kube-api-access-rkxtw\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.501471 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-ovndb-tls-certs\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.501536 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-httpd-config\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.501571 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-public-tls-certs\") pod \"neutron-6bcffb56d9-w524k\" (UID: 
\"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.501729 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-internal-tls-certs\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.501763 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-combined-ca-bundle\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.602987 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-ovndb-tls-certs\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.603103 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-httpd-config\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.603135 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-public-tls-certs\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.603171 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-internal-tls-certs\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.603203 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-combined-ca-bundle\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.603281 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-config\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.603324 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkxtw\" (UniqueName: \"kubernetes.io/projected/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-kube-api-access-rkxtw\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " 
pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.611191 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-internal-tls-certs\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.612763 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-public-tls-certs\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.612925 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-config\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.613941 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-httpd-config\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.625264 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-combined-ca-bundle\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.629570 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-ovndb-tls-certs\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.633898 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkxtw\" (UniqueName: \"kubernetes.io/projected/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-kube-api-access-rkxtw\") pod \"neutron-6bcffb56d9-w524k\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:54 crc kubenswrapper[4758]: I0130 08:49:54.743275 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:49:57 crc kubenswrapper[4758]: E0130 08:49:57.241221 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 30 08:49:57 crc kubenswrapper[4758]: E0130 08:49:57.242173 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rvklh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-x2v6d_openstack(25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 08:49:57 crc kubenswrapper[4758]: E0130 08:49:57.243784 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-x2v6d" podUID="25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" Jan 30 08:49:57 crc kubenswrapper[4758]: I0130 08:49:57.407534 4758 scope.go:117] "RemoveContainer" 
containerID="fbe24d5e6be67695cb5ddcaebff818aae946840f68728ace533f14d150ec3201" Jan 30 08:49:57 crc kubenswrapper[4758]: E0130 08:49:57.432265 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-x2v6d" podUID="25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" Jan 30 08:49:57 crc kubenswrapper[4758]: I0130 08:49:57.644704 4758 scope.go:117] "RemoveContainer" containerID="04574457799aeff7875482dc2fc9ef4ba7fb15bf31c64fb74962ff0458496cd3" Jan 30 08:49:57 crc kubenswrapper[4758]: I0130 08:49:57.691305 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:49:57 crc kubenswrapper[4758]: W0130 08:49:57.707516 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5af263c7_b4ef_4cd9_bf61_2caa6ce1a43f.slice/crio-09ea2b79cbc0aef888ba0674456b14d2b50865632ce01fc1d4296c4eec420a06 WatchSource:0}: Error finding container 09ea2b79cbc0aef888ba0674456b14d2b50865632ce01fc1d4296c4eec420a06: Status 404 returned error can't find the container with id 09ea2b79cbc0aef888ba0674456b14d2b50865632ce01fc1d4296c4eec420a06 Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.025229 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.148976 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-d5b7q"] Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.282709 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-slg8b"] Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.524434 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.539618 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-slg8b" event={"ID":"419e16c4-297d-490a-8fd3-6d365e20f5f2","Type":"ContainerStarted","Data":"d8412f7cc1a8a2ecba1258efe870b73eea12fefd558cd99ff2489cbead4bab2c"} Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.544526 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" event={"ID":"062c9394-cb5b-4768-b71f-2965c61905b8","Type":"ContainerStarted","Data":"b5a75a63374b225cb63a7afd682c9ae7eaaa46ad8ba68384e44dabbcfac3a2df"} Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.555448 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1ec2d231-4b26-4cc9-a09d-9091153da8a9","Type":"ContainerStarted","Data":"a7359b09c33ee03bcd96bf4b4d5dd4f4f882e30e5f48059a01c80017926d9869"} Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.563513 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f","Type":"ContainerStarted","Data":"09ea2b79cbc0aef888ba0674456b14d2b50865632ce01fc1d4296c4eec420a06"} Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.582674 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lqbmm" 
event={"ID":"11f7c236-867a-465b-9514-de6a765b312b","Type":"ContainerStarted","Data":"23862bcdc0458af24e0606a4eedaf48106778403061437cff21803ebeee27a94"} Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.628976 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-645778c498-xt8kb"] Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.635481 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf698bb7b-gp87v" event={"ID":"97906db2-3b2d-44ec-af77-d3edf75b7f76","Type":"ContainerStarted","Data":"dc846a1b60c49ae0094d7935fda041f5ad64e15973405e5847dcfee6d2733586"} Jan 30 08:49:58 crc kubenswrapper[4758]: I0130 08:49:58.644281 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-lqbmm" podStartSLOduration=9.977438305 podStartE2EDuration="42.644254487s" podCreationTimestamp="2026-01-30 08:49:16 +0000 UTC" firstStartedPulling="2026-01-30 08:49:18.193645759 +0000 UTC m=+1163.165957300" lastFinishedPulling="2026-01-30 08:49:50.860461931 +0000 UTC m=+1195.832773482" observedRunningTime="2026-01-30 08:49:58.607341183 +0000 UTC m=+1203.579652744" watchObservedRunningTime="2026-01-30 08:49:58.644254487 +0000 UTC m=+1203.616566038" Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.434609 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bcffb56d9-w524k"] Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.672709 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f","Type":"ContainerStarted","Data":"76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.678887 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf698bb7b-gp87v" event={"ID":"97906db2-3b2d-44ec-af77-d3edf75b7f76","Type":"ContainerStarted","Data":"33c5db16903a0ea24fc43801695a658eca16a53583c2497012de7812e4d3cb27"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.694437 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerStarted","Data":"5ac24c2e0d10a94520a56718b5cb7e279d3222a4776ef2910ce318c3d652e18a"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.695021 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerStarted","Data":"928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.705395 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-645778c498-xt8kb" event={"ID":"b9e036dd-d20e-4488-8354-7b1079bc8113","Type":"ContainerStarted","Data":"48f1de05547b3f44a9bb9e50923cbab0f98f39c1687f67b0d20d6bb8117c5d17"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.706090 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-645778c498-xt8kb" event={"ID":"b9e036dd-d20e-4488-8354-7b1079bc8113","Type":"ContainerStarted","Data":"d2f284d1f78a5a02cc7d33f74c6c84ab5179e1ed0d94292246f146a3dbb86c85"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.706137 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-645778c498-xt8kb" 
event={"ID":"b9e036dd-d20e-4488-8354-7b1079bc8113","Type":"ContainerStarted","Data":"e0c5305909e72f305e168967cbd28a761d5403564b6070027fcc64369e555331"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.706708 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5cf698bb7b-gp87v" podStartSLOduration=4.677959663 podStartE2EDuration="34.70668663s" podCreationTimestamp="2026-01-30 08:49:25 +0000 UTC" firstStartedPulling="2026-01-30 08:49:27.149890752 +0000 UTC m=+1172.122202303" lastFinishedPulling="2026-01-30 08:49:57.178617719 +0000 UTC m=+1202.150929270" observedRunningTime="2026-01-30 08:49:59.706477043 +0000 UTC m=+1204.678788614" watchObservedRunningTime="2026-01-30 08:49:59.70668663 +0000 UTC m=+1204.678998181" Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.706993 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.720521 4758 generic.go:334] "Generic (PLEG): container finished" podID="062c9394-cb5b-4768-b71f-2965c61905b8" containerID="c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035" exitCode=0 Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.720613 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" event={"ID":"062c9394-cb5b-4768-b71f-2965c61905b8","Type":"ContainerDied","Data":"c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.724584 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bcffb56d9-w524k" event={"ID":"bf67f1ce-88a8-4255-b067-6f6a001ec6b3","Type":"ContainerStarted","Data":"7e2b0cfeefa2f941937553b5eb918279c40a00098408904e6b1021d27662c44b"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.727615 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1ec2d231-4b26-4cc9-a09d-9091153da8a9","Type":"ContainerStarted","Data":"c5be373def2d5d6ba0348da3d6e663da2b77ce86d4b3d39d71e5ba9a890af4be"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.758060 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerStarted","Data":"714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.759164 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-645778c498-xt8kb" podStartSLOduration=7.759146756 podStartE2EDuration="7.759146756s" podCreationTimestamp="2026-01-30 08:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:59.750542254 +0000 UTC m=+1204.722853825" watchObservedRunningTime="2026-01-30 08:49:59.759146756 +0000 UTC m=+1204.731458307" Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.803009 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-slg8b" event={"ID":"419e16c4-297d-490a-8fd3-6d365e20f5f2","Type":"ContainerStarted","Data":"c95e7220fa0d1725e338904a7e29c2f7e1c50ca5a270bfbf0b0819abddbe5c04"} Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.846685 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-76fc974bd8-4mnvj" podStartSLOduration=4.835761378 
podStartE2EDuration="34.846658828s" podCreationTimestamp="2026-01-30 08:49:25 +0000 UTC" firstStartedPulling="2026-01-30 08:49:27.411581399 +0000 UTC m=+1172.383892950" lastFinishedPulling="2026-01-30 08:49:57.422478849 +0000 UTC m=+1202.394790400" observedRunningTime="2026-01-30 08:49:59.781858508 +0000 UTC m=+1204.754170079" watchObservedRunningTime="2026-01-30 08:49:59.846658828 +0000 UTC m=+1204.818970379" Jan 30 08:49:59 crc kubenswrapper[4758]: I0130 08:49:59.859963 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-slg8b" podStartSLOduration=24.85993894 podStartE2EDuration="24.85993894s" podCreationTimestamp="2026-01-30 08:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:49:59.855599672 +0000 UTC m=+1204.827911223" watchObservedRunningTime="2026-01-30 08:49:59.85993894 +0000 UTC m=+1204.832250491" Jan 30 08:50:00 crc kubenswrapper[4758]: I0130 08:50:00.815360 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" event={"ID":"062c9394-cb5b-4768-b71f-2965c61905b8","Type":"ContainerStarted","Data":"f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b"} Jan 30 08:50:00 crc kubenswrapper[4758]: I0130 08:50:00.817184 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:50:00 crc kubenswrapper[4758]: I0130 08:50:00.828665 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bcffb56d9-w524k" event={"ID":"bf67f1ce-88a8-4255-b067-6f6a001ec6b3","Type":"ContainerStarted","Data":"0344ce2026c43ed6ac12af127c29eb8e1de75869c4b56344c9c8d96774d64c6b"} Jan 30 08:50:00 crc kubenswrapper[4758]: I0130 08:50:00.837915 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f","Type":"ContainerStarted","Data":"c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea"} Jan 30 08:50:00 crc kubenswrapper[4758]: I0130 08:50:00.849866 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" podStartSLOduration=8.849842989 podStartE2EDuration="8.849842989s" podCreationTimestamp="2026-01-30 08:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:00.839567452 +0000 UTC m=+1205.811879013" watchObservedRunningTime="2026-01-30 08:50:00.849842989 +0000 UTC m=+1205.822154540" Jan 30 08:50:00 crc kubenswrapper[4758]: I0130 08:50:00.873978 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=33.873936744 podStartE2EDuration="33.873936744s" podCreationTimestamp="2026-01-30 08:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:00.866855489 +0000 UTC m=+1205.839167060" watchObservedRunningTime="2026-01-30 08:50:00.873936744 +0000 UTC m=+1205.846248305" Jan 30 08:50:01 crc kubenswrapper[4758]: I0130 08:50:01.886141 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1ec2d231-4b26-4cc9-a09d-9091153da8a9","Type":"ContainerStarted","Data":"133e57354834ba4048e5d9ae39382e69423ed21832e9edcfe49d508cca9e97e3"} Jan 
30 08:50:01 crc kubenswrapper[4758]: I0130 08:50:01.971600 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=27.971583527 podStartE2EDuration="27.971583527s" podCreationTimestamp="2026-01-30 08:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:01.945512409 +0000 UTC m=+1206.917823960" watchObservedRunningTime="2026-01-30 08:50:01.971583527 +0000 UTC m=+1206.943895078" Jan 30 08:50:02 crc kubenswrapper[4758]: I0130 08:50:02.918628 4758 generic.go:334] "Generic (PLEG): container finished" podID="11f7c236-867a-465b-9514-de6a765b312b" containerID="23862bcdc0458af24e0606a4eedaf48106778403061437cff21803ebeee27a94" exitCode=0 Jan 30 08:50:02 crc kubenswrapper[4758]: I0130 08:50:02.919006 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lqbmm" event={"ID":"11f7c236-867a-465b-9514-de6a765b312b","Type":"ContainerDied","Data":"23862bcdc0458af24e0606a4eedaf48106778403061437cff21803ebeee27a94"} Jan 30 08:50:02 crc kubenswrapper[4758]: I0130 08:50:02.931499 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bcffb56d9-w524k" event={"ID":"bf67f1ce-88a8-4255-b067-6f6a001ec6b3","Type":"ContainerStarted","Data":"5417adcb1a97f55a1325215740e4c7309f2a09d3b1b9b49830ac1bd56cdf3dd4"} Jan 30 08:50:02 crc kubenswrapper[4758]: I0130 08:50:02.932421 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:50:02 crc kubenswrapper[4758]: I0130 08:50:02.951068 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerStarted","Data":"35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8"} Jan 30 08:50:02 crc kubenswrapper[4758]: I0130 08:50:02.978607 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6bcffb56d9-w524k" podStartSLOduration=8.978587289 podStartE2EDuration="8.978587289s" podCreationTimestamp="2026-01-30 08:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:02.975933014 +0000 UTC m=+1207.948244575" watchObservedRunningTime="2026-01-30 08:50:02.978587289 +0000 UTC m=+1207.950898840" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.410598 4758 util.go:48] "No ready sandbox for pod can be found. 
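
By contrast, several of the startup entries here (neutron-645778c498-xt8kb, dnsmasq-dns-b6c948c7-d5b7q, and both glance pods) report firstStartedPulling and lastFinishedPulling as 0001-01-01 00:00:00 +0000 UTC. That is simply Go's zero time.Time: the latency tracker recorded no pull at all, presumably because no fresh image pull was needed, and accordingly podStartSLOduration equals podStartE2EDuration for each of those pods. A one-liner confirming the zero-value rendering:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        var t time.Time // zero value, never set
        fmt.Println(t, t.IsZero()) // 0001-01-01 00:00:00 +0000 UTC true
    }
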
Need to start a new one" pod="openstack/placement-db-sync-lqbmm" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.528028 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-scripts\") pod \"11f7c236-867a-465b-9514-de6a765b312b\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.528493 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-config-data\") pod \"11f7c236-867a-465b-9514-de6a765b312b\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.528597 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-combined-ca-bundle\") pod \"11f7c236-867a-465b-9514-de6a765b312b\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.528622 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmcjv\" (UniqueName: \"kubernetes.io/projected/11f7c236-867a-465b-9514-de6a765b312b-kube-api-access-wmcjv\") pod \"11f7c236-867a-465b-9514-de6a765b312b\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.528668 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11f7c236-867a-465b-9514-de6a765b312b-logs\") pod \"11f7c236-867a-465b-9514-de6a765b312b\" (UID: \"11f7c236-867a-465b-9514-de6a765b312b\") " Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.529492 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11f7c236-867a-465b-9514-de6a765b312b-logs" (OuterVolumeSpecName: "logs") pod "11f7c236-867a-465b-9514-de6a765b312b" (UID: "11f7c236-867a-465b-9514-de6a765b312b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.539175 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-scripts" (OuterVolumeSpecName: "scripts") pod "11f7c236-867a-465b-9514-de6a765b312b" (UID: "11f7c236-867a-465b-9514-de6a765b312b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.543182 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f7c236-867a-465b-9514-de6a765b312b-kube-api-access-wmcjv" (OuterVolumeSpecName: "kube-api-access-wmcjv") pod "11f7c236-867a-465b-9514-de6a765b312b" (UID: "11f7c236-867a-465b-9514-de6a765b312b"). InnerVolumeSpecName "kube-api-access-wmcjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.566088 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11f7c236-867a-465b-9514-de6a765b312b" (UID: "11f7c236-867a-465b-9514-de6a765b312b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.566493 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-config-data" (OuterVolumeSpecName: "config-data") pod "11f7c236-867a-465b-9514-de6a765b312b" (UID: "11f7c236-867a-465b-9514-de6a765b312b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.631509 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.631538 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.631569 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmcjv\" (UniqueName: \"kubernetes.io/projected/11f7c236-867a-465b-9514-de6a765b312b-kube-api-access-wmcjv\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.631579 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11f7c236-867a-465b-9514-de6a765b312b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.631589 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11f7c236-867a-465b-9514-de6a765b312b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.803936 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.803991 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.804007 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.804016 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.864022 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 08:50:04 crc kubenswrapper[4758]: I0130 08:50:04.865817 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:04.998190 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lqbmm" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:04.998016 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lqbmm" event={"ID":"11f7c236-867a-465b-9514-de6a765b312b","Type":"ContainerDied","Data":"846fcd3c2e04f783d1345c62f282c95a03f7f69db8ece0d61ccbe690d3fc4153"} Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.000678 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="846fcd3c2e04f783d1345c62f282c95a03f7f69db8ece0d61ccbe690d3fc4153" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.092776 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6bdfdc4b-wwqnt"] Jan 30 08:50:05 crc kubenswrapper[4758]: E0130 08:50:05.094697 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f7c236-867a-465b-9514-de6a765b312b" containerName="placement-db-sync" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.094739 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f7c236-867a-465b-9514-de6a765b312b" containerName="placement-db-sync" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.095017 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="11f7c236-867a-465b-9514-de6a765b312b" containerName="placement-db-sync" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.096591 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.102171 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.102319 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.102492 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.102834 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-tdg2m" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.103018 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.139485 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6bdfdc4b-wwqnt"] Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.252731 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-combined-ca-bundle\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.252803 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-scripts\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.252838 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8f8l\" (UniqueName: 
\"kubernetes.io/projected/82fd1f36-9f4f-441f-959d-e2eddc79c99b-kube-api-access-l8f8l\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.252861 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82fd1f36-9f4f-441f-959d-e2eddc79c99b-logs\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.253207 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-config-data\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.253328 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-internal-tls-certs\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.253547 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-public-tls-certs\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.355343 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-public-tls-certs\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.355450 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-combined-ca-bundle\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.355484 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-scripts\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.355522 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8f8l\" (UniqueName: \"kubernetes.io/projected/82fd1f36-9f4f-441f-959d-e2eddc79c99b-kube-api-access-l8f8l\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.355550 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82fd1f36-9f4f-441f-959d-e2eddc79c99b-logs\") 
pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.355624 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-config-data\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.355657 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-internal-tls-certs\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.360111 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-internal-tls-certs\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.360465 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82fd1f36-9f4f-441f-959d-e2eddc79c99b-logs\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.361247 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-public-tls-certs\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.362116 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-combined-ca-bundle\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.364244 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-scripts\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.366648 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-config-data\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.381738 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8f8l\" (UniqueName: \"kubernetes.io/projected/82fd1f36-9f4f-441f-959d-e2eddc79c99b-kube-api-access-l8f8l\") pod \"placement-6bdfdc4b-wwqnt\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:05 crc kubenswrapper[4758]: I0130 08:50:05.430088 4758 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:06 crc kubenswrapper[4758]: I0130 08:50:06.009402 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:50:06 crc kubenswrapper[4758]: I0130 08:50:06.009464 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:50:06 crc kubenswrapper[4758]: I0130 08:50:06.043140 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:50:06 crc kubenswrapper[4758]: I0130 08:50:06.043223 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:50:07 crc kubenswrapper[4758]: I0130 08:50:07.022716 4758 generic.go:334] "Generic (PLEG): container finished" podID="419e16c4-297d-490a-8fd3-6d365e20f5f2" containerID="c95e7220fa0d1725e338904a7e29c2f7e1c50ca5a270bfbf0b0819abddbe5c04" exitCode=0 Jan 30 08:50:07 crc kubenswrapper[4758]: I0130 08:50:07.022817 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-slg8b" event={"ID":"419e16c4-297d-490a-8fd3-6d365e20f5f2","Type":"ContainerDied","Data":"c95e7220fa0d1725e338904a7e29c2f7e1c50ca5a270bfbf0b0819abddbe5c04"} Jan 30 08:50:07 crc kubenswrapper[4758]: I0130 08:50:07.578554 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:50:07 crc kubenswrapper[4758]: I0130 08:50:07.666839 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-7trbn"] Jan 30 08:50:07 crc kubenswrapper[4758]: I0130 08:50:07.667094 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56798b757f-7trbn" podUID="0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" containerName="dnsmasq-dns" containerID="cri-o://f78641dc0545684ea84c19e7528fb1e66fc58f4b4489db98333bbe75698bd8be" gracePeriod=10 Jan 30 08:50:07 crc kubenswrapper[4758]: I0130 08:50:07.669930 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 08:50:07 crc kubenswrapper[4758]: I0130 08:50:07.669978 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 08:50:07 crc kubenswrapper[4758]: I0130 08:50:07.757323 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 08:50:07 crc kubenswrapper[4758]: I0130 08:50:07.766283 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 08:50:08 crc kubenswrapper[4758]: I0130 08:50:08.043378 4758 generic.go:334] "Generic (PLEG): container finished" podID="0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" containerID="f78641dc0545684ea84c19e7528fb1e66fc58f4b4489db98333bbe75698bd8be" exitCode=0 Jan 30 08:50:08 crc kubenswrapper[4758]: I0130 08:50:08.043632 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-7trbn" event={"ID":"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06","Type":"ContainerDied","Data":"f78641dc0545684ea84c19e7528fb1e66fc58f4b4489db98333bbe75698bd8be"} Jan 30 08:50:08 crc kubenswrapper[4758]: I0130 08:50:08.044840 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" 
Jan 30 08:50:08 crc kubenswrapper[4758]: I0130 08:50:08.044863 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 08:50:10 crc kubenswrapper[4758]: I0130 08:50:10.056087 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:50:10 crc kubenswrapper[4758]: I0130 08:50:10.056646 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:50:10 crc kubenswrapper[4758]: I0130 08:50:10.959743 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.123308 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-credential-keys\") pod \"419e16c4-297d-490a-8fd3-6d365e20f5f2\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.123438 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln7jd\" (UniqueName: \"kubernetes.io/projected/419e16c4-297d-490a-8fd3-6d365e20f5f2-kube-api-access-ln7jd\") pod \"419e16c4-297d-490a-8fd3-6d365e20f5f2\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.123587 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-scripts\") pod \"419e16c4-297d-490a-8fd3-6d365e20f5f2\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.131992 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-fernet-keys\") pod \"419e16c4-297d-490a-8fd3-6d365e20f5f2\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.132591 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-combined-ca-bundle\") pod \"419e16c4-297d-490a-8fd3-6d365e20f5f2\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.132793 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-config-data\") pod \"419e16c4-297d-490a-8fd3-6d365e20f5f2\" (UID: \"419e16c4-297d-490a-8fd3-6d365e20f5f2\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.153010 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/419e16c4-297d-490a-8fd3-6d365e20f5f2-kube-api-access-ln7jd" (OuterVolumeSpecName: "kube-api-access-ln7jd") pod "419e16c4-297d-490a-8fd3-6d365e20f5f2" (UID: "419e16c4-297d-490a-8fd3-6d365e20f5f2"). InnerVolumeSpecName "kube-api-access-ln7jd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.153500 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "419e16c4-297d-490a-8fd3-6d365e20f5f2" (UID: "419e16c4-297d-490a-8fd3-6d365e20f5f2"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.162730 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-scripts" (OuterVolumeSpecName: "scripts") pod "419e16c4-297d-490a-8fd3-6d365e20f5f2" (UID: "419e16c4-297d-490a-8fd3-6d365e20f5f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.180018 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "419e16c4-297d-490a-8fd3-6d365e20f5f2" (UID: "419e16c4-297d-490a-8fd3-6d365e20f5f2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.182520 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-slg8b" event={"ID":"419e16c4-297d-490a-8fd3-6d365e20f5f2","Type":"ContainerDied","Data":"d8412f7cc1a8a2ecba1258efe870b73eea12fefd558cd99ff2489cbead4bab2c"} Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.182600 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8412f7cc1a8a2ecba1258efe870b73eea12fefd558cd99ff2489cbead4bab2c" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.182663 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-slg8b" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.218156 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-config-data" (OuterVolumeSpecName: "config-data") pod "419e16c4-297d-490a-8fd3-6d365e20f5f2" (UID: "419e16c4-297d-490a-8fd3-6d365e20f5f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.237289 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.237356 4758 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.237368 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln7jd\" (UniqueName: \"kubernetes.io/projected/419e16c4-297d-490a-8fd3-6d365e20f5f2-kube-api-access-ln7jd\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.237376 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.237386 4758 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.246558 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "419e16c4-297d-490a-8fd3-6d365e20f5f2" (UID: "419e16c4-297d-490a-8fd3-6d365e20f5f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.339797 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/419e16c4-297d-490a-8fd3-6d365e20f5f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.394028 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.552539 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-nb\") pod \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.553115 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-config\") pod \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.553195 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lrbj\" (UniqueName: \"kubernetes.io/projected/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-kube-api-access-9lrbj\") pod \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.553262 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-dns-svc\") pod \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.553292 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-sb\") pod \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\" (UID: \"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06\") " Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.576768 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-kube-api-access-9lrbj" (OuterVolumeSpecName: "kube-api-access-9lrbj") pod "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" (UID: "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06"). InnerVolumeSpecName "kube-api-access-9lrbj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.599266 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6bdfdc4b-wwqnt"] Jan 30 08:50:11 crc kubenswrapper[4758]: W0130 08:50:11.653420 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82fd1f36_9f4f_441f_959d_e2eddc79c99b.slice/crio-d71a899a17b20de9e44856675ced999b7836a65883ac57c9f3c9ad5f73066287 WatchSource:0}: Error finding container d71a899a17b20de9e44856675ced999b7836a65883ac57c9f3c9ad5f73066287: Status 404 returned error can't find the container with id d71a899a17b20de9e44856675ced999b7836a65883ac57c9f3c9ad5f73066287 Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.655071 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lrbj\" (UniqueName: \"kubernetes.io/projected/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-kube-api-access-9lrbj\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.708401 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.708553 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.710439 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.747598 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.747718 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.817849 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-config" (OuterVolumeSpecName: "config") pod "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" (UID: "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.856618 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" (UID: "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.862440 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" (UID: "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.863666 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.863792 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.863803 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.907295 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" (UID: "0f95b5bb-621a-4c74-bc39-a5aea5ef4a06"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:11 crc kubenswrapper[4758]: I0130 08:50:11.972072 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.210221 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerStarted","Data":"c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96"} Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.224728 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c24lg" event={"ID":"12ff21aa-edae-4f56-a2ea-be0deb2d84d7","Type":"ContainerStarted","Data":"ce042854a2f3fd058ffc8182aa4289802d19b49b10f8800c0daceab9556857f3"} Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.250244 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-7trbn" event={"ID":"0f95b5bb-621a-4c74-bc39-a5aea5ef4a06","Type":"ContainerDied","Data":"21f1c5561b1c8dc8979b30b5fd4e9ebe5b905ba1454066061809e405ff87c8bc"} Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.250298 4758 scope.go:117] "RemoveContainer" containerID="f78641dc0545684ea84c19e7528fb1e66fc58f4b4489db98333bbe75698bd8be" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.250425 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-7trbn" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.252475 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-546cd7df57-wnwgz"] Jan 30 08:50:12 crc kubenswrapper[4758]: E0130 08:50:12.253307 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" containerName="init" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.253325 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" containerName="init" Jan 30 08:50:12 crc kubenswrapper[4758]: E0130 08:50:12.253339 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="419e16c4-297d-490a-8fd3-6d365e20f5f2" containerName="keystone-bootstrap" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.253347 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="419e16c4-297d-490a-8fd3-6d365e20f5f2" containerName="keystone-bootstrap" Jan 30 08:50:12 crc kubenswrapper[4758]: E0130 08:50:12.253364 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" containerName="dnsmasq-dns" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.253393 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" containerName="dnsmasq-dns" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.253587 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="419e16c4-297d-490a-8fd3-6d365e20f5f2" containerName="keystone-bootstrap" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.253614 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" containerName="dnsmasq-dns" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.264148 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.282368 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bdfdc4b-wwqnt" event={"ID":"82fd1f36-9f4f-441f-959d-e2eddc79c99b","Type":"ContainerStarted","Data":"d71a899a17b20de9e44856675ced999b7836a65883ac57c9f3c9ad5f73066287"} Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.289491 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.290230 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.290427 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w5f5m" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.290561 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.295280 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.296678 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.297923 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-fernet-keys\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.298028 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxkmf\" (UniqueName: \"kubernetes.io/projected/283998cc-90b9-49fb-91f5-7cfd514603d0-kube-api-access-pxkmf\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.298130 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-internal-tls-certs\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.298155 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-config-data\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.298252 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-scripts\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.298286 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-credential-keys\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.298328 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-public-tls-certs\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.298349 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-combined-ca-bundle\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.366312 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-c24lg" podStartSLOduration=6.646770495 podStartE2EDuration="56.366290024s" podCreationTimestamp="2026-01-30 08:49:16 +0000 UTC" firstStartedPulling="2026-01-30 08:49:21.251204505 +0000 UTC m=+1166.223516056" lastFinishedPulling="2026-01-30 08:50:10.970724034 +0000 UTC m=+1215.943035585" observedRunningTime="2026-01-30 08:50:12.263761836 +0000 UTC m=+1217.236073407" watchObservedRunningTime="2026-01-30 08:50:12.366290024 +0000 UTC m=+1217.338601575" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.404468 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxkmf\" (UniqueName: \"kubernetes.io/projected/283998cc-90b9-49fb-91f5-7cfd514603d0-kube-api-access-pxkmf\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.404538 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-internal-tls-certs\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.404564 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-config-data\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.404628 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-scripts\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.404664 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-credential-keys\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " 
pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.404707 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-public-tls-certs\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.404723 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-combined-ca-bundle\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.404776 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-fernet-keys\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.411482 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-scripts\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.414488 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-internal-tls-certs\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.416279 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-public-tls-certs\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.450641 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-credential-keys\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.456779 4758 scope.go:117] "RemoveContainer" containerID="b3ea3c454168822443d7f32c74a100f615de44f286974f28516fa17d776a4519" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.467026 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.468758 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxkmf\" (UniqueName: \"kubernetes.io/projected/283998cc-90b9-49fb-91f5-7cfd514603d0-kube-api-access-pxkmf\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.490587 4758 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-config-data\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.491960 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-fernet-keys\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.539318 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-546cd7df57-wnwgz"] Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.550178 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283998cc-90b9-49fb-91f5-7cfd514603d0-combined-ca-bundle\") pod \"keystone-546cd7df57-wnwgz\" (UID: \"283998cc-90b9-49fb-91f5-7cfd514603d0\") " pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.585744 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-7trbn"] Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.605355 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-7trbn"] Jan 30 08:50:12 crc kubenswrapper[4758]: I0130 08:50:12.627825 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:13 crc kubenswrapper[4758]: I0130 08:50:13.311657 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x2v6d" event={"ID":"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3","Type":"ContainerStarted","Data":"c783ba41ba97d546267a2a7d551d57bf2d634c9498d0ec5c9365e656091583d9"} Jan 30 08:50:13 crc kubenswrapper[4758]: I0130 08:50:13.326490 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-546cd7df57-wnwgz"] Jan 30 08:50:13 crc kubenswrapper[4758]: I0130 08:50:13.344224 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bdfdc4b-wwqnt" event={"ID":"82fd1f36-9f4f-441f-959d-e2eddc79c99b","Type":"ContainerStarted","Data":"21679f960e51878ddb064b4d7ad7fbc76c5dcdf3143e55291739f0ba963b83c7"} Jan 30 08:50:13 crc kubenswrapper[4758]: I0130 08:50:13.344262 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bdfdc4b-wwqnt" event={"ID":"82fd1f36-9f4f-441f-959d-e2eddc79c99b","Type":"ContainerStarted","Data":"ca13ba52ecd58780c742a36133f637fb5b60b222e69dea6e09fc29cd35f5fd19"} Jan 30 08:50:13 crc kubenswrapper[4758]: I0130 08:50:13.345280 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:13 crc kubenswrapper[4758]: I0130 08:50:13.345320 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:13 crc kubenswrapper[4758]: I0130 08:50:13.347153 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-x2v6d" podStartSLOduration=7.275150216 podStartE2EDuration="57.347136335s" podCreationTimestamp="2026-01-30 08:49:16 +0000 UTC" firstStartedPulling="2026-01-30 08:49:21.269911401 +0000 UTC m=+1166.242222952" lastFinishedPulling="2026-01-30 08:50:11.34189752 +0000 UTC m=+1216.314209071" 
observedRunningTime="2026-01-30 08:50:13.336807637 +0000 UTC m=+1218.309119198" watchObservedRunningTime="2026-01-30 08:50:13.347136335 +0000 UTC m=+1218.319447886" Jan 30 08:50:13 crc kubenswrapper[4758]: W0130 08:50:13.367081 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod283998cc_90b9_49fb_91f5_7cfd514603d0.slice/crio-f20d1a36680eb22ba48f6cd821710d9a4387937b78414002cb116675e1d2c1ab WatchSource:0}: Error finding container f20d1a36680eb22ba48f6cd821710d9a4387937b78414002cb116675e1d2c1ab: Status 404 returned error can't find the container with id f20d1a36680eb22ba48f6cd821710d9a4387937b78414002cb116675e1d2c1ab Jan 30 08:50:13 crc kubenswrapper[4758]: I0130 08:50:13.377929 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6bdfdc4b-wwqnt" podStartSLOduration=8.377909743 podStartE2EDuration="8.377909743s" podCreationTimestamp="2026-01-30 08:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:13.376634852 +0000 UTC m=+1218.348946413" watchObservedRunningTime="2026-01-30 08:50:13.377909743 +0000 UTC m=+1218.350221294" Jan 30 08:50:13 crc kubenswrapper[4758]: I0130 08:50:13.781997 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f95b5bb-621a-4c74-bc39-a5aea5ef4a06" path="/var/lib/kubelet/pods/0f95b5bb-621a-4c74-bc39-a5aea5ef4a06/volumes" Jan 30 08:50:14 crc kubenswrapper[4758]: I0130 08:50:14.363434 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-546cd7df57-wnwgz" event={"ID":"283998cc-90b9-49fb-91f5-7cfd514603d0","Type":"ContainerStarted","Data":"70e951145b350707e00f5be9c396d9f13b7c6cd4b7a47ae4d6d6493a86522538"} Jan 30 08:50:14 crc kubenswrapper[4758]: I0130 08:50:14.363471 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-546cd7df57-wnwgz" event={"ID":"283998cc-90b9-49fb-91f5-7cfd514603d0","Type":"ContainerStarted","Data":"f20d1a36680eb22ba48f6cd821710d9a4387937b78414002cb116675e1d2c1ab"} Jan 30 08:50:14 crc kubenswrapper[4758]: I0130 08:50:14.363496 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:14 crc kubenswrapper[4758]: I0130 08:50:14.403444 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-546cd7df57-wnwgz" podStartSLOduration=2.403417434 podStartE2EDuration="2.403417434s" podCreationTimestamp="2026-01-30 08:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:14.398737464 +0000 UTC m=+1219.371049015" watchObservedRunningTime="2026-01-30 08:50:14.403417434 +0000 UTC m=+1219.375728985" Jan 30 08:50:16 crc kubenswrapper[4758]: I0130 08:50:16.011830 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:50:16 crc kubenswrapper[4758]: I0130 08:50:16.045796 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 08:50:17 crc kubenswrapper[4758]: E0130 08:50:17.027129 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="f978baf9-b7c0-4d25-8bca-e95a018ba2af" Jan 30 08:50:17 crc kubenswrapper[4758]: I0130 08:50:17.403558 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 08:50:19 crc kubenswrapper[4758]: I0130 08:50:19.433834 4758 generic.go:334] "Generic (PLEG): container finished" podID="12ff21aa-edae-4f56-a2ea-be0deb2d84d7" containerID="ce042854a2f3fd058ffc8182aa4289802d19b49b10f8800c0daceab9556857f3" exitCode=0 Jan 30 08:50:19 crc kubenswrapper[4758]: I0130 08:50:19.434412 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c24lg" event={"ID":"12ff21aa-edae-4f56-a2ea-be0deb2d84d7","Type":"ContainerDied","Data":"ce042854a2f3fd058ffc8182aa4289802d19b49b10f8800c0daceab9556857f3"} Jan 30 08:50:22 crc kubenswrapper[4758]: I0130 08:50:22.051507 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:50:22 crc kubenswrapper[4758]: E0130 08:50:22.051767 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:50:22 crc kubenswrapper[4758]: E0130 08:50:22.052187 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:50:22 crc kubenswrapper[4758]: E0130 08:50:22.052272 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:52:24.052242318 +0000 UTC m=+1349.024553869 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:50:22 crc kubenswrapper[4758]: I0130 08:50:22.851655 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.159600 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6bcffb56d9-w524k"] Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.163782 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6bcffb56d9-w524k" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerName="neutron-api" containerID="cri-o://0344ce2026c43ed6ac12af127c29eb8e1de75869c4b56344c9c8d96774d64c6b" gracePeriod=30 Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.164189 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6bcffb56d9-w524k" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerName="neutron-httpd" containerID="cri-o://5417adcb1a97f55a1325215740e4c7309f2a09d3b1b9b49830ac1bd56cdf3dd4" gracePeriod=30 Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.188141 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.235225 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6f9c8c6ff5-f2sb7"] Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.237765 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.267494 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f9c8c6ff5-f2sb7"] Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.399644 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvtpz\" (UniqueName: \"kubernetes.io/projected/0bacb926-f58c-4c06-870a-633b7a3795c5-kube-api-access-xvtpz\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.399741 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-internal-tls-certs\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.399819 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-public-tls-certs\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.399874 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-combined-ca-bundle\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: 
\"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.399898 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-ovndb-tls-certs\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.399950 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-httpd-config\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.399988 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-config\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.480859 4758 generic.go:334] "Generic (PLEG): container finished" podID="25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" containerID="c783ba41ba97d546267a2a7d551d57bf2d634c9498d0ec5c9365e656091583d9" exitCode=0 Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.480936 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x2v6d" event={"ID":"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3","Type":"ContainerDied","Data":"c783ba41ba97d546267a2a7d551d57bf2d634c9498d0ec5c9365e656091583d9"} Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.503524 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-internal-tls-certs\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.503619 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-public-tls-certs\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.503689 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-combined-ca-bundle\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.503718 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-ovndb-tls-certs\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.509139 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-httpd-config\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.509298 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-config\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.509555 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvtpz\" (UniqueName: \"kubernetes.io/projected/0bacb926-f58c-4c06-870a-633b7a3795c5-kube-api-access-xvtpz\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.519445 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-config\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.520771 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-combined-ca-bundle\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.520921 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-ovndb-tls-certs\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.523957 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-internal-tls-certs\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.525708 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-httpd-config\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.526811 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bacb926-f58c-4c06-870a-633b7a3795c5-public-tls-certs\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.534637 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvtpz\" (UniqueName: \"kubernetes.io/projected/0bacb926-f58c-4c06-870a-633b7a3795c5-kube-api-access-xvtpz\") pod \"neutron-6f9c8c6ff5-f2sb7\" (UID: \"0bacb926-f58c-4c06-870a-633b7a3795c5\") " 
pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.564652 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:23 crc kubenswrapper[4758]: I0130 08:50:23.891503 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-c24lg" Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.027456 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-combined-ca-bundle\") pod \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.027980 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v55sf\" (UniqueName: \"kubernetes.io/projected/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-kube-api-access-v55sf\") pod \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.028204 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-db-sync-config-data\") pod \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\" (UID: \"12ff21aa-edae-4f56-a2ea-be0deb2d84d7\") " Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.033095 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "12ff21aa-edae-4f56-a2ea-be0deb2d84d7" (UID: "12ff21aa-edae-4f56-a2ea-be0deb2d84d7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.050571 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-kube-api-access-v55sf" (OuterVolumeSpecName: "kube-api-access-v55sf") pod "12ff21aa-edae-4f56-a2ea-be0deb2d84d7" (UID: "12ff21aa-edae-4f56-a2ea-be0deb2d84d7"). InnerVolumeSpecName "kube-api-access-v55sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.058472 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12ff21aa-edae-4f56-a2ea-be0deb2d84d7" (UID: "12ff21aa-edae-4f56-a2ea-be0deb2d84d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.132659 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v55sf\" (UniqueName: \"kubernetes.io/projected/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-kube-api-access-v55sf\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.132704 4758 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.132713 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ff21aa-edae-4f56-a2ea-be0deb2d84d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.497782 4758 generic.go:334] "Generic (PLEG): container finished" podID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerID="5417adcb1a97f55a1325215740e4c7309f2a09d3b1b9b49830ac1bd56cdf3dd4" exitCode=0 Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.497979 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bcffb56d9-w524k" event={"ID":"bf67f1ce-88a8-4255-b067-6f6a001ec6b3","Type":"ContainerDied","Data":"5417adcb1a97f55a1325215740e4c7309f2a09d3b1b9b49830ac1bd56cdf3dd4"} Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.500284 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-c24lg" Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.500964 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c24lg" event={"ID":"12ff21aa-edae-4f56-a2ea-be0deb2d84d7","Type":"ContainerDied","Data":"56863fe421871dd9708f2fb55d75c349707fee747f1bbd765e9d58a33bee80d2"} Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.501000 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56863fe421871dd9708f2fb55d75c349707fee747f1bbd765e9d58a33bee80d2" Jan 30 08:50:24 crc kubenswrapper[4758]: I0130 08:50:24.745255 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6bcffb56d9-w524k" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9696/\": dial tcp 10.217.0.155:9696: connect: connection refused" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.122907 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.152111 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-etc-machine-id\") pod \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.152204 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-combined-ca-bundle\") pod \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.152243 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-db-sync-config-data\") pod \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.152271 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-config-data\") pod \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.152337 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-scripts\") pod \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.152447 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvklh\" (UniqueName: \"kubernetes.io/projected/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-kube-api-access-rvklh\") pod \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\" (UID: \"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3\") " Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.154199 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" (UID: "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.169073 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-kube-api-access-rvklh" (OuterVolumeSpecName: "kube-api-access-rvklh") pod "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" (UID: "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3"). InnerVolumeSpecName "kube-api-access-rvklh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.179993 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-scripts" (OuterVolumeSpecName: "scripts") pod "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" (UID: "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.192666 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" (UID: "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.240963 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" (UID: "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.254611 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvklh\" (UniqueName: \"kubernetes.io/projected/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-kube-api-access-rvklh\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.254644 4758 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.254665 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.254676 4758 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.254686 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.341361 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-9c5c6c655-zfgmj"] Jan 30 08:50:25 crc kubenswrapper[4758]: E0130 08:50:25.341798 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" containerName="cinder-db-sync" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.341811 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" containerName="cinder-db-sync" Jan 30 08:50:25 crc kubenswrapper[4758]: E0130 08:50:25.341830 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12ff21aa-edae-4f56-a2ea-be0deb2d84d7" containerName="barbican-db-sync" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.341836 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="12ff21aa-edae-4f56-a2ea-be0deb2d84d7" containerName="barbican-db-sync" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.342014 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="12ff21aa-edae-4f56-a2ea-be0deb2d84d7" containerName="barbican-db-sync" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.342049 4758 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" containerName="cinder-db-sync" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.344695 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.366840 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db351db3-71c9-4b03-98b9-68da68f45f14-config-data\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.366920 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db351db3-71c9-4b03-98b9-68da68f45f14-logs\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.366959 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db351db3-71c9-4b03-98b9-68da68f45f14-combined-ca-bundle\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.367081 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db351db3-71c9-4b03-98b9-68da68f45f14-config-data-custom\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.367114 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rtsq\" (UniqueName: \"kubernetes.io/projected/db351db3-71c9-4b03-98b9-68da68f45f14-kube-api-access-9rtsq\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.383254 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.384306 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4bspq" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.384418 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.390592 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-config-data" (OuterVolumeSpecName: "config-data") pod "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" (UID: "25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.407639 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9c5c6c655-zfgmj"] Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.457022 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5b64f54b54-68xdf"] Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.481748 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5b64f54b54-68xdf"] Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.481784 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db351db3-71c9-4b03-98b9-68da68f45f14-config-data-custom\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.481971 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rtsq\" (UniqueName: \"kubernetes.io/projected/db351db3-71c9-4b03-98b9-68da68f45f14-kube-api-access-9rtsq\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.482093 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db351db3-71c9-4b03-98b9-68da68f45f14-config-data\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.482208 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db351db3-71c9-4b03-98b9-68da68f45f14-logs\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.482281 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db351db3-71c9-4b03-98b9-68da68f45f14-combined-ca-bundle\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.482386 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.481891 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.489589 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db351db3-71c9-4b03-98b9-68da68f45f14-logs\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.500535 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.507977 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db351db3-71c9-4b03-98b9-68da68f45f14-config-data-custom\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.531504 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db351db3-71c9-4b03-98b9-68da68f45f14-config-data\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.531632 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db351db3-71c9-4b03-98b9-68da68f45f14-combined-ca-bundle\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.537858 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rtsq\" (UniqueName: \"kubernetes.io/projected/db351db3-71c9-4b03-98b9-68da68f45f14-kube-api-access-9rtsq\") pod \"barbican-worker-9c5c6c655-zfgmj\" (UID: \"db351db3-71c9-4b03-98b9-68da68f45f14\") " pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.537927 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-798d46d59c-stp4l"] Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.541418 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.555352 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-798d46d59c-stp4l"] Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.566063 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x2v6d" event={"ID":"25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3","Type":"ContainerDied","Data":"3de4a4bfe0a1b2d053131f232fdec07f1fa0eaa08093b340deb9c2fbfcbc2d4c"} Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.566103 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3de4a4bfe0a1b2d053131f232fdec07f1fa0eaa08093b340deb9c2fbfcbc2d4c" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.566162 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-x2v6d" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.583730 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-config-data\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.583781 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-logs\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.583801 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-dns-svc\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.583840 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-nb\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.583855 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrdwj\" (UniqueName: \"kubernetes.io/projected/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-kube-api-access-rrdwj\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.583872 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-combined-ca-bundle\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.583906 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-config\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.583945 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-sb\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.584000 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-config-data-custom\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.584016 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzgkp\" (UniqueName: \"kubernetes.io/projected/ed572628-0a60-4b3b-a441-d352abdd1973-kube-api-access-dzgkp\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.688274 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-config-data\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.688343 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-dns-svc\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.688358 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-logs\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.688402 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-nb\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.688420 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-combined-ca-bundle\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.688437 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrdwj\" (UniqueName: \"kubernetes.io/projected/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-kube-api-access-rrdwj\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.688476 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-config\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc 
kubenswrapper[4758]: I0130 08:50:25.688513 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-sb\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.688587 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-config-data-custom\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.688605 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzgkp\" (UniqueName: \"kubernetes.io/projected/ed572628-0a60-4b3b-a441-d352abdd1973-kube-api-access-dzgkp\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.694818 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-sb\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.698803 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-dns-svc\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.699410 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-nb\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.700199 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-logs\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.713727 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-combined-ca-bundle\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.717215 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-config-data-custom\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 
08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.717927 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-config\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.718302 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-config-data\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.718422 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrdwj\" (UniqueName: \"kubernetes.io/projected/dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f-kube-api-access-rrdwj\") pod \"barbican-keystone-listener-5b64f54b54-68xdf\" (UID: \"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f\") " pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.737338 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9c5c6c655-zfgmj" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.740024 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzgkp\" (UniqueName: \"kubernetes.io/projected/ed572628-0a60-4b3b-a441-d352abdd1973-kube-api-access-dzgkp\") pod \"dnsmasq-dns-798d46d59c-stp4l\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.851976 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-78f5565ffd-7fzt7"] Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.940688 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.966192 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.976922 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.977704 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:25 crc kubenswrapper[4758]: I0130 08:50:25.981231 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78f5565ffd-7fzt7"] Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.034148 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.048613 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.104965 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggd6w\" (UniqueName: \"kubernetes.io/projected/6f80a6df-a108-4559-bc45-705f737ce4a1-kube-api-access-ggd6w\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.105418 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data-custom\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.105442 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.105471 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f80a6df-a108-4559-bc45-705f737ce4a1-logs\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.105486 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-combined-ca-bundle\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.234481 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data-custom\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.234774 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.234900 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f80a6df-a108-4559-bc45-705f737ce4a1-logs\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.234981 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-combined-ca-bundle\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.235115 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggd6w\" (UniqueName: \"kubernetes.io/projected/6f80a6df-a108-4559-bc45-705f737ce4a1-kube-api-access-ggd6w\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.235924 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f80a6df-a108-4559-bc45-705f737ce4a1-logs\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.242285 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.252674 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data-custom\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.264832 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-combined-ca-bundle\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.271614 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggd6w\" (UniqueName: \"kubernetes.io/projected/6f80a6df-a108-4559-bc45-705f737ce4a1-kube-api-access-ggd6w\") pod \"barbican-api-78f5565ffd-7fzt7\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.355830 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.429363 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-798d46d59c-stp4l"] Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.507115 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f9c8c6ff5-f2sb7"] Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.535537 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77c9c856fc-frscq"] Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.551539 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.622886 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77c9c856fc-frscq"] Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.652003 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-config\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.652223 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c65px\" (UniqueName: \"kubernetes.io/projected/7f24b01a-1d08-4fcc-9bbc-591644e40964-kube-api-access-c65px\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.652858 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-sb\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.653001 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-nb\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.653183 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-dns-svc\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.684507 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.689204 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.701253 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerStarted","Data":"5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db"} Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.701439 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="ceilometer-central-agent" containerID="cri-o://714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340" gracePeriod=30 Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.701696 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.701748 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="proxy-httpd" containerID="cri-o://5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db" gracePeriod=30 Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.701790 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="sg-core" containerID="cri-o://c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96" gracePeriod=30 Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.701825 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="ceilometer-notification-agent" containerID="cri-o://35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8" gracePeriod=30 Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.747250 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-89x58" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.747565 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.747602 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.747580 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.756219 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-nb\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.756327 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-dns-svc\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.756384 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-config\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.756429 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c65px\" (UniqueName: \"kubernetes.io/projected/7f24b01a-1d08-4fcc-9bbc-591644e40964-kube-api-access-c65px\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.756469 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-sb\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.757528 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-sb\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.760853 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-config\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.761380 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-dns-svc\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.763216 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-nb\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.859704 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96e4936b-1f93-4777-a6a9-a13172cba649-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.860116 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.860166 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.860216 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.860242 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-scripts\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.860261 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8bqh\" (UniqueName: \"kubernetes.io/projected/96e4936b-1f93-4777-a6a9-a13172cba649-kube-api-access-b8bqh\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.863255 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.885389 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c65px\" (UniqueName: \"kubernetes.io/projected/7f24b01a-1d08-4fcc-9bbc-591644e40964-kube-api-access-c65px\") pod \"dnsmasq-dns-77c9c856fc-frscq\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.913750 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9c5c6c655-zfgmj"] Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.925389 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.207736863 podStartE2EDuration="1m10.925361464s" podCreationTimestamp="2026-01-30 08:49:16 +0000 UTC" firstStartedPulling="2026-01-30 08:49:21.280670352 +0000 UTC m=+1166.252981903" lastFinishedPulling="2026-01-30 08:50:24.998294953 +0000 UTC m=+1229.970606504" observedRunningTime="2026-01-30 08:50:26.901109013 +0000 UTC m=+1231.873420564" watchObservedRunningTime="2026-01-30 08:50:26.925361464 +0000 UTC m=+1231.897673015" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.966099 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96e4936b-1f93-4777-a6a9-a13172cba649-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.966154 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.966205 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.966269 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.966308 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-scripts\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.966325 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8bqh\" (UniqueName: \"kubernetes.io/projected/96e4936b-1f93-4777-a6a9-a13172cba649-kube-api-access-b8bqh\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.967005 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96e4936b-1f93-4777-a6a9-a13172cba649-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.979868 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:26 crc kubenswrapper[4758]: I0130 08:50:26.985304 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.004749 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-scripts\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.011547 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8bqh\" (UniqueName: \"kubernetes.io/projected/96e4936b-1f93-4777-a6a9-a13172cba649-kube-api-access-b8bqh\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.017782 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.044303 4758 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.095988 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.097589 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.120998 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.121340 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.136210 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.178775 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46b02117-6d35-4cca-8eac-ee772c3916d0-logs\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.179221 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nrlg\" (UniqueName: \"kubernetes.io/projected/46b02117-6d35-4cca-8eac-ee772c3916d0-kube-api-access-9nrlg\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.179270 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.179296 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data-custom\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.179357 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.179388 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46b02117-6d35-4cca-8eac-ee772c3916d0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.179417 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-scripts\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 
08:50:27.281535 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nrlg\" (UniqueName: \"kubernetes.io/projected/46b02117-6d35-4cca-8eac-ee772c3916d0-kube-api-access-9nrlg\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.281600 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.281635 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data-custom\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.281678 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.281702 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46b02117-6d35-4cca-8eac-ee772c3916d0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.281731 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-scripts\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.281751 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46b02117-6d35-4cca-8eac-ee772c3916d0-logs\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.282220 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46b02117-6d35-4cca-8eac-ee772c3916d0-logs\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.291921 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46b02117-6d35-4cca-8eac-ee772c3916d0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.314245 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-scripts\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.315791 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.315791 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data-custom\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.320022 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.333979 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nrlg\" (UniqueName: \"kubernetes.io/projected/46b02117-6d35-4cca-8eac-ee772c3916d0-kube-api-access-9nrlg\") pod \"cinder-api-0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.432764 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.497705 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-798d46d59c-stp4l"] Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.711952 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5b64f54b54-68xdf"] Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.844503 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798d46d59c-stp4l" event={"ID":"ed572628-0a60-4b3b-a441-d352abdd1973","Type":"ContainerStarted","Data":"07d78efd8cc8419582756034bb64dab8efc70ca16dace7b8b82c11dfca81ce07"} Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.844988 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f9c8c6ff5-f2sb7" event={"ID":"0bacb926-f58c-4c06-870a-633b7a3795c5","Type":"ContainerStarted","Data":"e8886208e64939d802a6a132e831c1b307211d0b8aa24f58bb02fa9b339da58e"} Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.852538 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78f5565ffd-7fzt7"] Jan 30 08:50:27 crc kubenswrapper[4758]: I0130 08:50:27.873380 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9c5c6c655-zfgmj" event={"ID":"db351db3-71c9-4b03-98b9-68da68f45f14","Type":"ContainerStarted","Data":"4468ed31af8ae5e154cc77f22fd571e961b8cb309cc56d0ad78a50dfe4f053e3"} Jan 30 08:50:28 crc kubenswrapper[4758]: I0130 08:50:28.196802 4758 generic.go:334] "Generic (PLEG): container finished" podID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerID="c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96" exitCode=2 Jan 30 08:50:28 crc kubenswrapper[4758]: I0130 08:50:28.197166 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerDied","Data":"c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96"} Jan 30 08:50:28 crc kubenswrapper[4758]: I0130 08:50:28.231129 4758 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77c9c856fc-frscq"] Jan 30 08:50:28 crc kubenswrapper[4758]: I0130 08:50:28.500852 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:50:28 crc kubenswrapper[4758]: I0130 08:50:28.807529 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.222249 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96e4936b-1f93-4777-a6a9-a13172cba649","Type":"ContainerStarted","Data":"90b168ab74f0566ec17e5d43d386b2fcb8eeb79a45036e7170359c844ac4189c"} Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.230442 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"46b02117-6d35-4cca-8eac-ee772c3916d0","Type":"ContainerStarted","Data":"34dd11d55e470d260388ede08e80598aeb8947a7588a438590094ab2a374b774"} Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.237448 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" event={"ID":"7f24b01a-1d08-4fcc-9bbc-591644e40964","Type":"ContainerStarted","Data":"6d3908fc7a3657c2e0650ef09a082b68d8f4744d5e8a7add60e40d22f8b4f455"} Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.278503 4758 generic.go:334] "Generic (PLEG): container finished" podID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerID="0344ce2026c43ed6ac12af127c29eb8e1de75869c4b56344c9c8d96774d64c6b" exitCode=0 Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.278639 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bcffb56d9-w524k" event={"ID":"bf67f1ce-88a8-4255-b067-6f6a001ec6b3","Type":"ContainerDied","Data":"0344ce2026c43ed6ac12af127c29eb8e1de75869c4b56344c9c8d96774d64c6b"} Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.319367 4758 generic.go:334] "Generic (PLEG): container finished" podID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerID="35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8" exitCode=0 Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.319408 4758 generic.go:334] "Generic (PLEG): container finished" podID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerID="714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340" exitCode=0 Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.319476 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerDied","Data":"35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8"} Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.319514 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerDied","Data":"714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340"} Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.335913 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f9c8c6ff5-f2sb7" event={"ID":"0bacb926-f58c-4c06-870a-633b7a3795c5","Type":"ContainerStarted","Data":"94895509e9247171353ccb7b6c99b7f3ecb3c377484b617d31e0f4c5150bc05d"} Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.375709 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f5565ffd-7fzt7" 
event={"ID":"6f80a6df-a108-4559-bc45-705f737ce4a1","Type":"ContainerStarted","Data":"f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449"} Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.375792 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f5565ffd-7fzt7" event={"ID":"6f80a6df-a108-4559-bc45-705f737ce4a1","Type":"ContainerStarted","Data":"42842348e2efe75cc2feccd0c07f7b19138f408619623716e0521afb772faf84"} Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.378922 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" event={"ID":"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f","Type":"ContainerStarted","Data":"1a200f221ada47d4363852254c7d1c361330f7347b02e22a0000e9d5cd071b19"} Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.602265 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.621006 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-public-tls-certs\") pod \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.621122 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-combined-ca-bundle\") pod \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.621197 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-httpd-config\") pod \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.621294 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-config\") pod \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.621342 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkxtw\" (UniqueName: \"kubernetes.io/projected/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-kube-api-access-rkxtw\") pod \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.621533 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-ovndb-tls-certs\") pod \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.621642 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-internal-tls-certs\") pod \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\" (UID: \"bf67f1ce-88a8-4255-b067-6f6a001ec6b3\") " Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.669633 4758 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "bf67f1ce-88a8-4255-b067-6f6a001ec6b3" (UID: "bf67f1ce-88a8-4255-b067-6f6a001ec6b3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.723623 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-kube-api-access-rkxtw" (OuterVolumeSpecName: "kube-api-access-rkxtw") pod "bf67f1ce-88a8-4255-b067-6f6a001ec6b3" (UID: "bf67f1ce-88a8-4255-b067-6f6a001ec6b3"). InnerVolumeSpecName "kube-api-access-rkxtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.724519 4758 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.724560 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkxtw\" (UniqueName: \"kubernetes.io/projected/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-kube-api-access-rkxtw\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.896628 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.911174 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-config" (OuterVolumeSpecName: "config") pod "bf67f1ce-88a8-4255-b067-6f6a001ec6b3" (UID: "bf67f1ce-88a8-4255-b067-6f6a001ec6b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.915785 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bf67f1ce-88a8-4255-b067-6f6a001ec6b3" (UID: "bf67f1ce-88a8-4255-b067-6f6a001ec6b3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.915868 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf67f1ce-88a8-4255-b067-6f6a001ec6b3" (UID: "bf67f1ce-88a8-4255-b067-6f6a001ec6b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.937362 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bf67f1ce-88a8-4255-b067-6f6a001ec6b3" (UID: "bf67f1ce-88a8-4255-b067-6f6a001ec6b3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.938774 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.938807 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.938819 4758 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.938831 4758 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:29 crc kubenswrapper[4758]: I0130 08:50:29.990829 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "bf67f1ce-88a8-4255-b067-6f6a001ec6b3" (UID: "bf67f1ce-88a8-4255-b067-6f6a001ec6b3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.040748 4758 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf67f1ce-88a8-4255-b067-6f6a001ec6b3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.395390 4758 generic.go:334] "Generic (PLEG): container finished" podID="7f24b01a-1d08-4fcc-9bbc-591644e40964" containerID="0fad6fa55bd6734af8e520f919028effe9343fb8ce554bc0f6cdb0c2e748ee42" exitCode=0 Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.395500 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" event={"ID":"7f24b01a-1d08-4fcc-9bbc-591644e40964","Type":"ContainerDied","Data":"0fad6fa55bd6734af8e520f919028effe9343fb8ce554bc0f6cdb0c2e748ee42"} Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.410308 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bcffb56d9-w524k" event={"ID":"bf67f1ce-88a8-4255-b067-6f6a001ec6b3","Type":"ContainerDied","Data":"7e2b0cfeefa2f941937553b5eb918279c40a00098408904e6b1021d27662c44b"} Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.410396 4758 scope.go:117] "RemoveContainer" containerID="5417adcb1a97f55a1325215740e4c7309f2a09d3b1b9b49830ac1bd56cdf3dd4" Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.410652 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6bcffb56d9-w524k" Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.431443 4758 generic.go:334] "Generic (PLEG): container finished" podID="ed572628-0a60-4b3b-a441-d352abdd1973" containerID="8d73003660a11a0b4fdfcf6f4814fd21ff002ef41709a84740c8884f6ddb8363" exitCode=0 Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.431847 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798d46d59c-stp4l" event={"ID":"ed572628-0a60-4b3b-a441-d352abdd1973","Type":"ContainerDied","Data":"8d73003660a11a0b4fdfcf6f4814fd21ff002ef41709a84740c8884f6ddb8363"} Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.482675 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f9c8c6ff5-f2sb7" event={"ID":"0bacb926-f58c-4c06-870a-633b7a3795c5","Type":"ContainerStarted","Data":"a11f21054e7e9d09710ffb5c7dae2de1b641670c33364526fad392037e11b67b"} Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.483664 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.504641 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f5565ffd-7fzt7" event={"ID":"6f80a6df-a108-4559-bc45-705f737ce4a1","Type":"ContainerStarted","Data":"a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb"} Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.505535 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.505976 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.529090 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6bcffb56d9-w524k"] Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.565974 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6bcffb56d9-w524k"] Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.568920 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6f9c8c6ff5-f2sb7" podStartSLOduration=7.568905133 podStartE2EDuration="7.568905133s" podCreationTimestamp="2026-01-30 08:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:30.559742613 +0000 UTC m=+1235.532054164" watchObservedRunningTime="2026-01-30 08:50:30.568905133 +0000 UTC m=+1235.541216684" Jan 30 08:50:30 crc kubenswrapper[4758]: I0130 08:50:30.613452 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-78f5565ffd-7fzt7" podStartSLOduration=5.613420528 podStartE2EDuration="5.613420528s" podCreationTimestamp="2026-01-30 08:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:30.604417672 +0000 UTC m=+1235.576729223" watchObservedRunningTime="2026-01-30 08:50:30.613420528 +0000 UTC m=+1235.585732079" Jan 30 08:50:31 crc kubenswrapper[4758]: I0130 08:50:31.522694 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"46b02117-6d35-4cca-8eac-ee772c3916d0","Type":"ContainerStarted","Data":"cdc39ebd14d5fe4c1108b613a24ba8e7099db0c7ab701f775ab1aaaf3a839362"} Jan 30 
08:50:31 crc kubenswrapper[4758]: I0130 08:50:31.795201 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" path="/var/lib/kubelet/pods/bf67f1ce-88a8-4255-b067-6f6a001ec6b3/volumes" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.252964 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.348961 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-config\") pod \"ed572628-0a60-4b3b-a441-d352abdd1973\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.349046 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-dns-svc\") pod \"ed572628-0a60-4b3b-a441-d352abdd1973\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.349317 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-nb\") pod \"ed572628-0a60-4b3b-a441-d352abdd1973\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.349372 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzgkp\" (UniqueName: \"kubernetes.io/projected/ed572628-0a60-4b3b-a441-d352abdd1973-kube-api-access-dzgkp\") pod \"ed572628-0a60-4b3b-a441-d352abdd1973\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.349405 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-sb\") pod \"ed572628-0a60-4b3b-a441-d352abdd1973\" (UID: \"ed572628-0a60-4b3b-a441-d352abdd1973\") " Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.364543 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed572628-0a60-4b3b-a441-d352abdd1973-kube-api-access-dzgkp" (OuterVolumeSpecName: "kube-api-access-dzgkp") pod "ed572628-0a60-4b3b-a441-d352abdd1973" (UID: "ed572628-0a60-4b3b-a441-d352abdd1973"). InnerVolumeSpecName "kube-api-access-dzgkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.405449 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ed572628-0a60-4b3b-a441-d352abdd1973" (UID: "ed572628-0a60-4b3b-a441-d352abdd1973"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.411636 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ed572628-0a60-4b3b-a441-d352abdd1973" (UID: "ed572628-0a60-4b3b-a441-d352abdd1973"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.439732 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-config" (OuterVolumeSpecName: "config") pod "ed572628-0a60-4b3b-a441-d352abdd1973" (UID: "ed572628-0a60-4b3b-a441-d352abdd1973"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.444407 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed572628-0a60-4b3b-a441-d352abdd1973" (UID: "ed572628-0a60-4b3b-a441-d352abdd1973"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.457606 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.457659 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.457676 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.457692 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzgkp\" (UniqueName: \"kubernetes.io/projected/ed572628-0a60-4b3b-a441-d352abdd1973-kube-api-access-dzgkp\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.457708 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed572628-0a60-4b3b-a441-d352abdd1973-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.552953 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798d46d59c-stp4l" event={"ID":"ed572628-0a60-4b3b-a441-d352abdd1973","Type":"ContainerDied","Data":"07d78efd8cc8419582756034bb64dab8efc70ca16dace7b8b82c11dfca81ce07"} Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.553120 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-798d46d59c-stp4l" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.618948 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-798d46d59c-stp4l"] Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.637367 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-798d46d59c-stp4l"] Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.731342 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-856f46cdd-mkt57"] Jan 30 08:50:32 crc kubenswrapper[4758]: E0130 08:50:32.737916 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed572628-0a60-4b3b-a441-d352abdd1973" containerName="init" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.738216 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed572628-0a60-4b3b-a441-d352abdd1973" containerName="init" Jan 30 08:50:32 crc kubenswrapper[4758]: E0130 08:50:32.738351 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerName="neutron-httpd" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.738447 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerName="neutron-httpd" Jan 30 08:50:32 crc kubenswrapper[4758]: E0130 08:50:32.738565 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerName="neutron-api" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.738650 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerName="neutron-api" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.739067 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed572628-0a60-4b3b-a441-d352abdd1973" containerName="init" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.739166 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerName="neutron-httpd" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.739261 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf67f1ce-88a8-4255-b067-6f6a001ec6b3" containerName="neutron-api" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.740636 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.745131 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.745226 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.753898 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-856f46cdd-mkt57"] Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.874222 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnt9j\" (UniqueName: \"kubernetes.io/projected/e33c3e33-3106-483e-bdba-400a2911ff27-kube-api-access-hnt9j\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.874331 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-combined-ca-bundle\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.874371 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-config-data\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.874395 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-internal-tls-certs\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.874795 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-public-tls-certs\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.874861 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e33c3e33-3106-483e-bdba-400a2911ff27-logs\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.875001 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-config-data-custom\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.976525 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-combined-ca-bundle\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.976663 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-config-data\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.976755 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-internal-tls-certs\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.976847 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-public-tls-certs\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.976963 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e33c3e33-3106-483e-bdba-400a2911ff27-logs\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.977072 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-config-data-custom\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.977248 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnt9j\" (UniqueName: \"kubernetes.io/projected/e33c3e33-3106-483e-bdba-400a2911ff27-kube-api-access-hnt9j\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.977736 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e33c3e33-3106-483e-bdba-400a2911ff27-logs\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.985146 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-config-data\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.985627 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-public-tls-certs\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.988700 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-combined-ca-bundle\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.990304 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-config-data-custom\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.993809 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33c3e33-3106-483e-bdba-400a2911ff27-internal-tls-certs\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:32 crc kubenswrapper[4758]: I0130 08:50:32.996575 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnt9j\" (UniqueName: \"kubernetes.io/projected/e33c3e33-3106-483e-bdba-400a2911ff27-kube-api-access-hnt9j\") pod \"barbican-api-856f46cdd-mkt57\" (UID: \"e33c3e33-3106-483e-bdba-400a2911ff27\") " pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:33 crc kubenswrapper[4758]: I0130 08:50:33.061488 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:33 crc kubenswrapper[4758]: I0130 08:50:33.082388 4758 scope.go:117] "RemoveContainer" containerID="0344ce2026c43ed6ac12af127c29eb8e1de75869c4b56344c9c8d96774d64c6b" Jan 30 08:50:33 crc kubenswrapper[4758]: I0130 08:50:33.127333 4758 scope.go:117] "RemoveContainer" containerID="8d73003660a11a0b4fdfcf6f4814fd21ff002ef41709a84740c8884f6ddb8363" Jan 30 08:50:33 crc kubenswrapper[4758]: I0130 08:50:33.701953 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:33 crc kubenswrapper[4758]: I0130 08:50:33.745281 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" podStartSLOduration=7.745258956 podStartE2EDuration="7.745258956s" podCreationTimestamp="2026-01-30 08:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:33.733870384 +0000 UTC m=+1238.706181945" watchObservedRunningTime="2026-01-30 08:50:33.745258956 +0000 UTC m=+1238.717570507" Jan 30 08:50:33 crc kubenswrapper[4758]: I0130 08:50:33.853780 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed572628-0a60-4b3b-a441-d352abdd1973" path="/var/lib/kubelet/pods/ed572628-0a60-4b3b-a441-d352abdd1973/volumes" Jan 30 08:50:33 crc kubenswrapper[4758]: I0130 08:50:33.953719 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-856f46cdd-mkt57"] Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.747562 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" event={"ID":"7f24b01a-1d08-4fcc-9bbc-591644e40964","Type":"ContainerStarted","Data":"516401664f75cbce8e0c6bf0b65e1672368441d65029374151387707c2397642"} Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.757405 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" event={"ID":"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f","Type":"ContainerStarted","Data":"43ac1706b412ac604dd8819c1e99c31a2e0a290f8ab564316a56e3fdf868d108"} Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.757686 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" event={"ID":"dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f","Type":"ContainerStarted","Data":"8a04314bf8430d5d7c2e771ad77209d299ca17a3ab7ef8eace9464a8971ebcc1"} Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.764682 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-856f46cdd-mkt57" event={"ID":"e33c3e33-3106-483e-bdba-400a2911ff27","Type":"ContainerStarted","Data":"2a1f9a134963a8ae7359c908fd6dd76f9dfcbb29ae3a680b1d8a49a73a2042cc"} Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.764738 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-856f46cdd-mkt57" event={"ID":"e33c3e33-3106-483e-bdba-400a2911ff27","Type":"ContainerStarted","Data":"fb8710a11cadf2278c15d8ea07424df193840bb6f5cd337fdbdae7feeb6e0d7d"} Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.784794 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"46b02117-6d35-4cca-8eac-ee772c3916d0","Type":"ContainerStarted","Data":"b61c04acdafa8b1f3c73d0f0ca3fbc902eb1178c23428a7de5775bf0a6dbd643"} Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 
08:50:34.785027 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api-log" containerID="cri-o://cdc39ebd14d5fe4c1108b613a24ba8e7099db0c7ab701f775ab1aaaf3a839362" gracePeriod=30 Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.785390 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.785429 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api" containerID="cri-o://b61c04acdafa8b1f3c73d0f0ca3fbc902eb1178c23428a7de5775bf0a6dbd643" gracePeriod=30 Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.799209 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9c5c6c655-zfgmj" event={"ID":"db351db3-71c9-4b03-98b9-68da68f45f14","Type":"ContainerStarted","Data":"c73e3e6f184a18e010d163cc713f26908d4fcd3144544fba2c833abfe75dc561"} Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.799263 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9c5c6c655-zfgmj" event={"ID":"db351db3-71c9-4b03-98b9-68da68f45f14","Type":"ContainerStarted","Data":"0aa7e0241f1bc372b3cfa866dfd90c64c33e15edc3083f4e5daede07f64075ef"} Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.836215 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5b64f54b54-68xdf" podStartSLOduration=4.638685501 podStartE2EDuration="9.836194395s" podCreationTimestamp="2026-01-30 08:50:25 +0000 UTC" firstStartedPulling="2026-01-30 08:50:27.9649145 +0000 UTC m=+1232.937226041" lastFinishedPulling="2026-01-30 08:50:33.162423384 +0000 UTC m=+1238.134734935" observedRunningTime="2026-01-30 08:50:34.789116379 +0000 UTC m=+1239.761427930" watchObservedRunningTime="2026-01-30 08:50:34.836194395 +0000 UTC m=+1239.808505946" Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.877537 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.877515358 podStartE2EDuration="7.877515358s" podCreationTimestamp="2026-01-30 08:50:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:34.824774392 +0000 UTC m=+1239.797085943" watchObservedRunningTime="2026-01-30 08:50:34.877515358 +0000 UTC m=+1239.849826909" Jan 30 08:50:34 crc kubenswrapper[4758]: I0130 08:50:34.879840 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-9c5c6c655-zfgmj" podStartSLOduration=3.670361718 podStartE2EDuration="9.879825121s" podCreationTimestamp="2026-01-30 08:50:25 +0000 UTC" firstStartedPulling="2026-01-30 08:50:26.932641605 +0000 UTC m=+1231.904953156" lastFinishedPulling="2026-01-30 08:50:33.142105008 +0000 UTC m=+1238.114416559" observedRunningTime="2026-01-30 08:50:34.854502558 +0000 UTC m=+1239.826814109" watchObservedRunningTime="2026-01-30 08:50:34.879825121 +0000 UTC m=+1239.852136672" Jan 30 08:50:35 crc kubenswrapper[4758]: I0130 08:50:35.810445 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"96e4936b-1f93-4777-a6a9-a13172cba649","Type":"ContainerStarted","Data":"5d2eb36a855bb9311f78b9807aa198691619a0fa942b8d09db1207b8ae0b2531"} Jan 30 08:50:35 crc kubenswrapper[4758]: I0130 08:50:35.813478 4758 generic.go:334] "Generic (PLEG): container finished" podID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerID="cdc39ebd14d5fe4c1108b613a24ba8e7099db0c7ab701f775ab1aaaf3a839362" exitCode=143 Jan 30 08:50:35 crc kubenswrapper[4758]: I0130 08:50:35.813565 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"46b02117-6d35-4cca-8eac-ee772c3916d0","Type":"ContainerDied","Data":"cdc39ebd14d5fe4c1108b613a24ba8e7099db0c7ab701f775ab1aaaf3a839362"} Jan 30 08:50:35 crc kubenswrapper[4758]: I0130 08:50:35.816221 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-856f46cdd-mkt57" event={"ID":"e33c3e33-3106-483e-bdba-400a2911ff27","Type":"ContainerStarted","Data":"43fe8835acf263bba3d4baaefd72c6d009a74897d66eda20dcfc35cc50e978c6"} Jan 30 08:50:35 crc kubenswrapper[4758]: I0130 08:50:35.817887 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:35 crc kubenswrapper[4758]: I0130 08:50:35.912343 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-856f46cdd-mkt57" podStartSLOduration=3.912314774 podStartE2EDuration="3.912314774s" podCreationTimestamp="2026-01-30 08:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:35.85902107 +0000 UTC m=+1240.831332641" watchObservedRunningTime="2026-01-30 08:50:35.912314774 +0000 UTC m=+1240.884626325" Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.011808 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.011937 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.013097 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"5ac24c2e0d10a94520a56718b5cb7e279d3222a4776ef2910ce318c3d652e18a"} pod="openstack/horizon-76fc974bd8-4mnvj" containerMessage="Container horizon failed startup probe, will be restarted" Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.013146 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" containerID="cri-o://5ac24c2e0d10a94520a56718b5cb7e279d3222a4776ef2910ce318c3d652e18a" gracePeriod=30 Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.044444 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.044546 4758 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.045857 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"33c5db16903a0ea24fc43801695a658eca16a53583c2497012de7812e4d3cb27"} pod="openstack/horizon-5cf698bb7b-gp87v" containerMessage="Container horizon failed startup probe, will be restarted" Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.045922 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" containerID="cri-o://33c5db16903a0ea24fc43801695a658eca16a53583c2497012de7812e4d3cb27" gracePeriod=30 Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.828435 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96e4936b-1f93-4777-a6a9-a13172cba649","Type":"ContainerStarted","Data":"fce6f3291cba9836db05bb7c0174edd5b7fc39638fb327ec393d8e4db2ced12b"} Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.828503 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:36 crc kubenswrapper[4758]: I0130 08:50:36.862578 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.330832156 podStartE2EDuration="10.862555202s" podCreationTimestamp="2026-01-30 08:50:26 +0000 UTC" firstStartedPulling="2026-01-30 08:50:28.740403054 +0000 UTC m=+1233.712714605" lastFinishedPulling="2026-01-30 08:50:33.2721261 +0000 UTC m=+1238.244437651" observedRunningTime="2026-01-30 08:50:36.854153265 +0000 UTC m=+1241.826464836" watchObservedRunningTime="2026-01-30 08:50:36.862555202 +0000 UTC m=+1241.834866763" Jan 30 08:50:37 crc kubenswrapper[4758]: I0130 08:50:37.122715 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.012825 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.206467 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.398276 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.495499 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-75649bd464-bvxps"] Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.497371 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.520282 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-75649bd464-bvxps"] Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.572461 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca89048c-91af-4732-8ef8-24da4618ccf9-logs\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.572516 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-public-tls-certs\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.572565 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-combined-ca-bundle\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.572601 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-internal-tls-certs\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.572621 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-config-data\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.572668 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-scripts\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.572694 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf2xw\" (UniqueName: \"kubernetes.io/projected/ca89048c-91af-4732-8ef8-24da4618ccf9-kube-api-access-vf2xw\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.674222 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca89048c-91af-4732-8ef8-24da4618ccf9-logs\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.674277 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-public-tls-certs\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.674328 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-combined-ca-bundle\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.674364 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-internal-tls-certs\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.674383 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-config-data\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.674433 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-scripts\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.674463 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf2xw\" (UniqueName: \"kubernetes.io/projected/ca89048c-91af-4732-8ef8-24da4618ccf9-kube-api-access-vf2xw\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.674852 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca89048c-91af-4732-8ef8-24da4618ccf9-logs\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.693456 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-internal-tls-certs\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.700515 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-public-tls-certs\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.717618 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-config-data\") pod 
\"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.717671 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-scripts\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.719181 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca89048c-91af-4732-8ef8-24da4618ccf9-combined-ca-bundle\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.725631 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf2xw\" (UniqueName: \"kubernetes.io/projected/ca89048c-91af-4732-8ef8-24da4618ccf9-kube-api-access-vf2xw\") pod \"placement-75649bd464-bvxps\" (UID: \"ca89048c-91af-4732-8ef8-24da4618ccf9\") " pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:40 crc kubenswrapper[4758]: I0130 08:50:40.827418 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:41 crc kubenswrapper[4758]: I0130 08:50:41.439287 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:50:41 crc kubenswrapper[4758]: I0130 08:50:41.439635 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:50:41 crc kubenswrapper[4758]: I0130 08:50:41.523474 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-75649bd464-bvxps"] Jan 30 08:50:41 crc kubenswrapper[4758]: I0130 08:50:41.915916 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75649bd464-bvxps" event={"ID":"ca89048c-91af-4732-8ef8-24da4618ccf9","Type":"ContainerStarted","Data":"e406e4cffc95e5b37a40c8c543dac09c9bb3a88880092f7b275690ecba191d1f"} Jan 30 08:50:41 crc kubenswrapper[4758]: I0130 08:50:41.916591 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75649bd464-bvxps" event={"ID":"ca89048c-91af-4732-8ef8-24da4618ccf9","Type":"ContainerStarted","Data":"5de66c700ca69441159acd4761f559bd5b2fb7a93d262443fd005a85d102bd83"} Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.047270 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.131054 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-d5b7q"] Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.131369 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" 
podUID="062c9394-cb5b-4768-b71f-2965c61905b8" containerName="dnsmasq-dns" containerID="cri-o://f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b" gracePeriod=10 Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.839198 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.906308 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.935673 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-nb\") pod \"062c9394-cb5b-4768-b71f-2965c61905b8\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.935775 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-sb\") pod \"062c9394-cb5b-4768-b71f-2965c61905b8\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.935804 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-dns-svc\") pod \"062c9394-cb5b-4768-b71f-2965c61905b8\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.935892 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drwmm\" (UniqueName: \"kubernetes.io/projected/062c9394-cb5b-4768-b71f-2965c61905b8-kube-api-access-drwmm\") pod \"062c9394-cb5b-4768-b71f-2965c61905b8\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.935953 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-config\") pod \"062c9394-cb5b-4768-b71f-2965c61905b8\" (UID: \"062c9394-cb5b-4768-b71f-2965c61905b8\") " Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.985055 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75649bd464-bvxps" event={"ID":"ca89048c-91af-4732-8ef8-24da4618ccf9","Type":"ContainerStarted","Data":"ad16d4424ac2a9447c86d0ee2e465b3eaa540f53dabcaf087f4315d70594023e"} Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.986183 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.986234 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-75649bd464-bvxps" Jan 30 08:50:42 crc kubenswrapper[4758]: I0130 08:50:42.994391 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/062c9394-cb5b-4768-b71f-2965c61905b8-kube-api-access-drwmm" (OuterVolumeSpecName: "kube-api-access-drwmm") pod "062c9394-cb5b-4768-b71f-2965c61905b8" (UID: "062c9394-cb5b-4768-b71f-2965c61905b8"). InnerVolumeSpecName "kube-api-access-drwmm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.020014 4758 generic.go:334] "Generic (PLEG): container finished" podID="062c9394-cb5b-4768-b71f-2965c61905b8" containerID="f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b" exitCode=0 Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.022607 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.022652 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" event={"ID":"062c9394-cb5b-4768-b71f-2965c61905b8","Type":"ContainerDied","Data":"f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b"} Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.083417 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" event={"ID":"062c9394-cb5b-4768-b71f-2965c61905b8","Type":"ContainerDied","Data":"b5a75a63374b225cb63a7afd682c9ae7eaaa46ad8ba68384e44dabbcfac3a2df"} Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.083463 4758 scope.go:117] "RemoveContainer" containerID="f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.065787 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drwmm\" (UniqueName: \"kubernetes.io/projected/062c9394-cb5b-4768-b71f-2965c61905b8-kube-api-access-drwmm\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.083841 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-75649bd464-bvxps" podStartSLOduration=3.083816909 podStartE2EDuration="3.083816909s" podCreationTimestamp="2026-01-30 08:50:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:43.038818729 +0000 UTC m=+1248.011130300" watchObservedRunningTime="2026-01-30 08:50:43.083816909 +0000 UTC m=+1248.056128460" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.127273 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-config" (OuterVolumeSpecName: "config") pod "062c9394-cb5b-4768-b71f-2965c61905b8" (UID: "062c9394-cb5b-4768-b71f-2965c61905b8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.173578 4758 scope.go:117] "RemoveContainer" containerID="c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.185103 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.216247 4758 scope.go:117] "RemoveContainer" containerID="f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b" Jan 30 08:50:43 crc kubenswrapper[4758]: E0130 08:50:43.227258 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b\": container with ID starting with f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b not found: ID does not exist" containerID="f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.227307 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b"} err="failed to get container status \"f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b\": rpc error: code = NotFound desc = could not find container \"f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b\": container with ID starting with f410784165d7135ab79cf68d80135e8fd36f655963e6aac4daefa6f4bfc0f45b not found: ID does not exist" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.227334 4758 scope.go:117] "RemoveContainer" containerID="c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.227604 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "062c9394-cb5b-4768-b71f-2965c61905b8" (UID: "062c9394-cb5b-4768-b71f-2965c61905b8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:43 crc kubenswrapper[4758]: E0130 08:50:43.228131 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035\": container with ID starting with c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035 not found: ID does not exist" containerID="c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.228166 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035"} err="failed to get container status \"c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035\": rpc error: code = NotFound desc = could not find container \"c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035\": container with ID starting with c1a1387a0cc6ab137e2c5725fc8ea94d38702c86be1ad3e42304ab099f98a035 not found: ID does not exist" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.257950 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "062c9394-cb5b-4768-b71f-2965c61905b8" (UID: "062c9394-cb5b-4768-b71f-2965c61905b8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.265509 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "062c9394-cb5b-4768-b71f-2965c61905b8" (UID: "062c9394-cb5b-4768-b71f-2965c61905b8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.286460 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.286674 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.286765 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/062c9394-cb5b-4768-b71f-2965c61905b8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.414784 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-d5b7q"] Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.426716 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-d5b7q"] Jan 30 08:50:43 crc kubenswrapper[4758]: I0130 08:50:43.779404 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="062c9394-cb5b-4768-b71f-2965c61905b8" path="/var/lib/kubelet/pods/062c9394-cb5b-4768-b71f-2965c61905b8/volumes" Jan 30 08:50:44 crc kubenswrapper[4758]: I0130 08:50:44.069329 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-856f46cdd-mkt57" podUID="e33c3e33-3106-483e-bdba-400a2911ff27" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 08:50:45 crc kubenswrapper[4758]: I0130 08:50:45.231250 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:50:45 crc kubenswrapper[4758]: I0130 08:50:45.440522 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:50:46 crc kubenswrapper[4758]: I0130 08:50:46.544450 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:50:46 crc kubenswrapper[4758]: I0130 08:50:46.546119 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:50:46 crc kubenswrapper[4758]: I0130 08:50:46.566813 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:46 crc kubenswrapper[4758]: I0130 08:50:46.567633 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:47 crc kubenswrapper[4758]: I0130 08:50:47.011514 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:47 crc kubenswrapper[4758]: I0130 08:50:47.142588 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 08:50:47 crc kubenswrapper[4758]: I0130 08:50:47.205222 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:50:47 crc kubenswrapper[4758]: I0130 08:50:47.299328 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 08:50:47 crc kubenswrapper[4758]: I0130 08:50:47.425741 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-856f46cdd-mkt57" Jan 30 08:50:47 crc kubenswrapper[4758]: I0130 08:50:47.485911 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.165:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:50:47 crc kubenswrapper[4758]: I0130 08:50:47.503809 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78f5565ffd-7fzt7"] Jan 30 08:50:47 crc kubenswrapper[4758]: I0130 08:50:47.584387 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b6c948c7-d5b7q" podUID="062c9394-cb5b-4768-b71f-2965c61905b8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: i/o timeout" Jan 30 08:50:47 crc kubenswrapper[4758]: I0130 08:50:47.922405 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-546cd7df57-wnwgz" Jan 30 08:50:48 crc kubenswrapper[4758]: I0130 08:50:48.123367 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api-log" containerID="cri-o://f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449" gracePeriod=30 Jan 30 08:50:48 crc kubenswrapper[4758]: I0130 08:50:48.123488 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api" containerID="cri-o://a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb" gracePeriod=30 Jan 30 08:50:48 crc kubenswrapper[4758]: I0130 08:50:48.123577 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" containerName="cinder-scheduler" containerID="cri-o://5d2eb36a855bb9311f78b9807aa198691619a0fa942b8d09db1207b8ae0b2531" gracePeriod=30 Jan 30 08:50:48 crc kubenswrapper[4758]: I0130 08:50:48.123895 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" 
containerName="probe" containerID="cri-o://fce6f3291cba9836db05bb7c0174edd5b7fc39638fb327ec393d8e4db2ced12b" gracePeriod=30 Jan 30 08:50:49 crc kubenswrapper[4758]: I0130 08:50:49.135961 4758 generic.go:334] "Generic (PLEG): container finished" podID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerID="f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449" exitCode=143 Jan 30 08:50:49 crc kubenswrapper[4758]: I0130 08:50:49.136078 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f5565ffd-7fzt7" event={"ID":"6f80a6df-a108-4559-bc45-705f737ce4a1","Type":"ContainerDied","Data":"f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449"} Jan 30 08:50:49 crc kubenswrapper[4758]: I0130 08:50:49.140218 4758 generic.go:334] "Generic (PLEG): container finished" podID="96e4936b-1f93-4777-a6a9-a13172cba649" containerID="fce6f3291cba9836db05bb7c0174edd5b7fc39638fb327ec393d8e4db2ced12b" exitCode=0 Jan 30 08:50:49 crc kubenswrapper[4758]: I0130 08:50:49.140281 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96e4936b-1f93-4777-a6a9-a13172cba649","Type":"ContainerDied","Data":"fce6f3291cba9836db05bb7c0174edd5b7fc39638fb327ec393d8e4db2ced12b"} Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.152929 4758 generic.go:334] "Generic (PLEG): container finished" podID="96e4936b-1f93-4777-a6a9-a13172cba649" containerID="5d2eb36a855bb9311f78b9807aa198691619a0fa942b8d09db1207b8ae0b2531" exitCode=0 Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.152981 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96e4936b-1f93-4777-a6a9-a13172cba649","Type":"ContainerDied","Data":"5d2eb36a855bb9311f78b9807aa198691619a0fa942b8d09db1207b8ae0b2531"} Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.477378 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.548751 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.556103 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-combined-ca-bundle\") pod \"96e4936b-1f93-4777-a6a9-a13172cba649\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.557650 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data-custom\") pod \"96e4936b-1f93-4777-a6a9-a13172cba649\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.557778 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8bqh\" (UniqueName: \"kubernetes.io/projected/96e4936b-1f93-4777-a6a9-a13172cba649-kube-api-access-b8bqh\") pod \"96e4936b-1f93-4777-a6a9-a13172cba649\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.557925 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96e4936b-1f93-4777-a6a9-a13172cba649-etc-machine-id\") pod \"96e4936b-1f93-4777-a6a9-a13172cba649\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.558103 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-scripts\") pod \"96e4936b-1f93-4777-a6a9-a13172cba649\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.558208 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96e4936b-1f93-4777-a6a9-a13172cba649-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "96e4936b-1f93-4777-a6a9-a13172cba649" (UID: "96e4936b-1f93-4777-a6a9-a13172cba649"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.558704 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data\") pod \"96e4936b-1f93-4777-a6a9-a13172cba649\" (UID: \"96e4936b-1f93-4777-a6a9-a13172cba649\") " Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.559388 4758 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96e4936b-1f93-4777-a6a9-a13172cba649-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.565509 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "96e4936b-1f93-4777-a6a9-a13172cba649" (UID: "96e4936b-1f93-4777-a6a9-a13172cba649"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.569118 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e4936b-1f93-4777-a6a9-a13172cba649-kube-api-access-b8bqh" (OuterVolumeSpecName: "kube-api-access-b8bqh") pod "96e4936b-1f93-4777-a6a9-a13172cba649" (UID: "96e4936b-1f93-4777-a6a9-a13172cba649"). InnerVolumeSpecName "kube-api-access-b8bqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.591221 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-scripts" (OuterVolumeSpecName: "scripts") pod "96e4936b-1f93-4777-a6a9-a13172cba649" (UID: "96e4936b-1f93-4777-a6a9-a13172cba649"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.661321 4758 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.661560 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8bqh\" (UniqueName: \"kubernetes.io/projected/96e4936b-1f93-4777-a6a9-a13172cba649-kube-api-access-b8bqh\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.661623 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.705018 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96e4936b-1f93-4777-a6a9-a13172cba649" (UID: "96e4936b-1f93-4777-a6a9-a13172cba649"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.732133 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data" (OuterVolumeSpecName: "config-data") pod "96e4936b-1f93-4777-a6a9-a13172cba649" (UID: "96e4936b-1f93-4777-a6a9-a13172cba649"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.762883 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:50 crc kubenswrapper[4758]: I0130 08:50:50.762929 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e4936b-1f93-4777-a6a9-a13172cba649-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.164384 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96e4936b-1f93-4777-a6a9-a13172cba649","Type":"ContainerDied","Data":"90b168ab74f0566ec17e5d43d386b2fcb8eeb79a45036e7170359c844ac4189c"} Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.164426 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.164441 4758 scope.go:117] "RemoveContainer" containerID="fce6f3291cba9836db05bb7c0174edd5b7fc39638fb327ec393d8e4db2ced12b" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.190561 4758 scope.go:117] "RemoveContainer" containerID="5d2eb36a855bb9311f78b9807aa198691619a0fa942b8d09db1207b8ae0b2531" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.218899 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.238621 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.269227 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:50:51 crc kubenswrapper[4758]: E0130 08:50:51.269611 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062c9394-cb5b-4768-b71f-2965c61905b8" containerName="init" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.269628 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="062c9394-cb5b-4768-b71f-2965c61905b8" containerName="init" Jan 30 08:50:51 crc kubenswrapper[4758]: E0130 08:50:51.269642 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" containerName="cinder-scheduler" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.269648 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" containerName="cinder-scheduler" Jan 30 08:50:51 crc kubenswrapper[4758]: E0130 08:50:51.269661 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062c9394-cb5b-4768-b71f-2965c61905b8" containerName="dnsmasq-dns" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.269667 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="062c9394-cb5b-4768-b71f-2965c61905b8" containerName="dnsmasq-dns" Jan 30 08:50:51 crc kubenswrapper[4758]: E0130 08:50:51.269693 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" containerName="probe" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.269698 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" containerName="probe" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.269895 4758 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="062c9394-cb5b-4768-b71f-2965c61905b8" containerName="dnsmasq-dns" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.269918 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" containerName="cinder-scheduler" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.269938 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" containerName="probe" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.270911 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.272782 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.297922 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.357286 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.357304 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78f5565ffd-7fzt7" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.374559 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.374716 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-scripts\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.374748 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbd50144-fe99-468f-a32b-172996d95ca1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.374777 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmc2h\" (UniqueName: \"kubernetes.io/projected/fbd50144-fe99-468f-a32b-172996d95ca1-kube-api-access-dmc2h\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.375204 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.375411 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-config-data\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.476966 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmc2h\" (UniqueName: \"kubernetes.io/projected/fbd50144-fe99-468f-a32b-172996d95ca1-kube-api-access-dmc2h\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.477085 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.477154 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-config-data\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.477274 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.477313 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-scripts\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.477338 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbd50144-fe99-468f-a32b-172996d95ca1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.477432 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbd50144-fe99-468f-a32b-172996d95ca1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.483292 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-scripts\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.485730 
4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.489487 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.494938 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbd50144-fe99-468f-a32b-172996d95ca1-config-data\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.501023 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmc2h\" (UniqueName: \"kubernetes.io/projected/fbd50144-fe99-468f-a32b-172996d95ca1-kube-api-access-dmc2h\") pod \"cinder-scheduler-0\" (UID: \"fbd50144-fe99-468f-a32b-172996d95ca1\") " pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.592391 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.746478 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.789081 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f80a6df-a108-4559-bc45-705f737ce4a1-logs\") pod \"6f80a6df-a108-4559-bc45-705f737ce4a1\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.789580 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data\") pod \"6f80a6df-a108-4559-bc45-705f737ce4a1\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.789619 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data-custom\") pod \"6f80a6df-a108-4559-bc45-705f737ce4a1\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.789999 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-combined-ca-bundle\") pod \"6f80a6df-a108-4559-bc45-705f737ce4a1\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.790111 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggd6w\" (UniqueName: \"kubernetes.io/projected/6f80a6df-a108-4559-bc45-705f737ce4a1-kube-api-access-ggd6w\") pod \"6f80a6df-a108-4559-bc45-705f737ce4a1\" (UID: \"6f80a6df-a108-4559-bc45-705f737ce4a1\") " Jan 30 08:50:51 crc 
kubenswrapper[4758]: I0130 08:50:51.794808 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96e4936b-1f93-4777-a6a9-a13172cba649" path="/var/lib/kubelet/pods/96e4936b-1f93-4777-a6a9-a13172cba649/volumes" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.796818 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f80a6df-a108-4559-bc45-705f737ce4a1-logs" (OuterVolumeSpecName: "logs") pod "6f80a6df-a108-4559-bc45-705f737ce4a1" (UID: "6f80a6df-a108-4559-bc45-705f737ce4a1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.800597 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f80a6df-a108-4559-bc45-705f737ce4a1-kube-api-access-ggd6w" (OuterVolumeSpecName: "kube-api-access-ggd6w") pod "6f80a6df-a108-4559-bc45-705f737ce4a1" (UID: "6f80a6df-a108-4559-bc45-705f737ce4a1"). InnerVolumeSpecName "kube-api-access-ggd6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.801449 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6f80a6df-a108-4559-bc45-705f737ce4a1" (UID: "6f80a6df-a108-4559-bc45-705f737ce4a1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.841294 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f80a6df-a108-4559-bc45-705f737ce4a1" (UID: "6f80a6df-a108-4559-bc45-705f737ce4a1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.904657 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f80a6df-a108-4559-bc45-705f737ce4a1-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.904688 4758 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.904706 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.904718 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggd6w\" (UniqueName: \"kubernetes.io/projected/6f80a6df-a108-4559-bc45-705f737ce4a1-kube-api-access-ggd6w\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.924357 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 08:50:51 crc kubenswrapper[4758]: E0130 08:50:51.924899 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api-log" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.924923 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api-log" Jan 30 08:50:51 crc kubenswrapper[4758]: E0130 08:50:51.924959 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.924970 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.925222 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.925273 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerName="barbican-api-log" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.927225 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.934577 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.936920 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.937235 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data" (OuterVolumeSpecName: "config-data") pod "6f80a6df-a108-4559-bc45-705f737ce4a1" (UID: "6f80a6df-a108-4559-bc45-705f737ce4a1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.937416 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 08:50:51 crc kubenswrapper[4758]: I0130 08:50:51.937457 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-4jqf4" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.018387 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config-secret\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.018495 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.018514 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l42cg\" (UniqueName: \"kubernetes.io/projected/cd770c38-148a-45c9-bb2c-175504c327f0-kube-api-access-l42cg\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.018547 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.018878 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f80a6df-a108-4559-bc45-705f737ce4a1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.120807 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config-secret\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.120960 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.120988 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l42cg\" (UniqueName: \"kubernetes.io/projected/cd770c38-148a-45c9-bb2c-175504c327f0-kube-api-access-l42cg\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.121855 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.122875 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.125658 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config-secret\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.125658 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.139843 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l42cg\" (UniqueName: \"kubernetes.io/projected/cd770c38-148a-45c9-bb2c-175504c327f0-kube-api-access-l42cg\") pod \"openstackclient\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.175464 4758 generic.go:334] "Generic (PLEG): container finished" podID="6f80a6df-a108-4559-bc45-705f737ce4a1" containerID="a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb" exitCode=0 Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.175589 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f5565ffd-7fzt7" event={"ID":"6f80a6df-a108-4559-bc45-705f737ce4a1","Type":"ContainerDied","Data":"a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb"} Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.175710 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78f5565ffd-7fzt7" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.177143 4758 scope.go:117] "RemoveContainer" containerID="a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.177067 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78f5565ffd-7fzt7" event={"ID":"6f80a6df-a108-4559-bc45-705f737ce4a1","Type":"ContainerDied","Data":"42842348e2efe75cc2feccd0c07f7b19138f408619623716e0521afb772faf84"} Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.255735 4758 scope.go:117] "RemoveContainer" containerID="f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.266143 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.316931 4758 scope.go:117] "RemoveContainer" containerID="a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb"
Jan 30 08:50:52 crc kubenswrapper[4758]: E0130 08:50:52.325259 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb\": container with ID starting with a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb not found: ID does not exist" containerID="a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.325517 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb"} err="failed to get container status \"a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb\": rpc error: code = NotFound desc = could not find container \"a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb\": container with ID starting with a05b84be92b64027f48cb551491115c81e3e350ece733c78e0b53fffacb083fb not found: ID does not exist"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.325635 4758 scope.go:117] "RemoveContainer" containerID="f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.328906 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78f5565ffd-7fzt7"]
Jan 30 08:50:52 crc kubenswrapper[4758]: E0130 08:50:52.330303 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449\": container with ID starting with f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449 not found: ID does not exist" containerID="f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.330343 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449"} err="failed to get container status \"f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449\": rpc error: code = NotFound desc = could not find container \"f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449\": container with ID starting with f7311036534ee1b05d79d9972221eda69035b78bbd803cc1109b84faffd16449 not found: ID does not exist"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.373788 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-78f5565ffd-7fzt7"]
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.408185 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.442931 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.464558 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.476321 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
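The "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs above, logged amid the barbican-api and openstackclient delete/recreate churn, are benign: the kubelet asks CRI-O to remove containers that are already gone, gets a NotFound status back, and treats that as the desired outcome. Removal has to be idempotent because PLEG relists and API-driven deletions can race. A minimal sketch of that handling, with an illustrative errNotFound standing in for a gRPC NotFound status (not the kubelet's actual helper):

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for a gRPC status with code NotFound, as in
// the "could not find container" errors logged above.
var errNotFound = errors.New("rpc error: code = NotFound")

// removeContainer models an idempotent delete: a NotFound answer from
// the runtime means the container is already gone, which is what we
// wanted, so it is logged and swallowed rather than retried.
func removeContainer(id string, runtimeRemove func(string) error) error {
	if err := runtimeRemove(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("DeleteContainer returned error containerID=%s err=%v (treated as already removed)\n", id, err)
			return nil
		}
		return err // real failures still surface
	}
	return nil
}

func main() {
	gone := func(string) error { return fmt.Errorf("could not find container: %w", errNotFound) }
	_ = removeContainer("a05b84be92b6", gone)
}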
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.477953 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.493082 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.535298 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-openstack-config\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.535671 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.535785 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-openstack-config-secret\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.535939 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwdgv\" (UniqueName: \"kubernetes.io/projected/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-kube-api-access-zwdgv\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient"
Jan 30 08:50:52 crc kubenswrapper[4758]: E0130 08:50:52.580174 4758 log.go:32] "RunPodSandbox from runtime service failed" err=<
Jan 30 08:50:52 crc kubenswrapper[4758]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_cd770c38-148a-45c9-bb2c-175504c327f0_0(5ed9d3849529876401c758a17bf267b6fd7fa6f5f7c55f2cfa40325d7da9482e): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5ed9d3849529876401c758a17bf267b6fd7fa6f5f7c55f2cfa40325d7da9482e" Netns:"/var/run/netns/a1907451-6347-43a8-b4e3-3c856c71d622" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=5ed9d3849529876401c758a17bf267b6fd7fa6f5f7c55f2cfa40325d7da9482e;K8S_POD_UID=cd770c38-148a-45c9-bb2c-175504c327f0" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/cd770c38-148a-45c9-bb2c-175504c327f0]: expected pod UID "cd770c38-148a-45c9-bb2c-175504c327f0" but got "e81b8de8-1714-4a5d-852a-e61d4bc9cd5d" from Kube API
Jan 30 08:50:52 crc kubenswrapper[4758]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Jan 30 08:50:52 crc kubenswrapper[4758]: >
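The RunPodSandbox failure above is a delete/recreate race, not a network fault: the openstackclient pod was deleted and immediately recreated with the same name but a new UID (cd770c38-... replaced by e81b8de8-...). The sandbox request was prepared against the old UID, while multus re-resolves the pod from the API server, sees the new UID, and rejects the ADD with status 400; the kubelet then retries with a fresh sandbox for the new UID (and, further down, the status manager likewise skips status updates for the old UID). A sketch of the UID consistency check, parsing the CNI Args string from the failed request; names like parseCNIArgs and checkPodUID are illustrative, not multus's actual functions:

package main

import (
	"fmt"
	"strings"
)

// parseCNIArgs splits a CNI Args string of the form "K=V;K=V;..."
// as seen in the failed request above.
func parseCNIArgs(args string) map[string]string {
	kv := map[string]string{}
	for _, pair := range strings.Split(args, ";") {
		if k, v, ok := strings.Cut(pair, "="); ok {
			kv[k] = v
		}
	}
	return kv
}

// checkPodUID mirrors the guard the plugin applies: the UID carried in
// the sandbox request must match the UID the API server currently
// reports for that namespace/name, otherwise the request is stale.
func checkPodUID(args string, apiUID string) error {
	requested := parseCNIArgs(args)["K8S_POD_UID"]
	if requested != apiUID {
		return fmt.Errorf("expected pod UID %q but got %q from Kube API", requested, apiUID)
	}
	return nil
}

func main() {
	args := "IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_UID=cd770c38-148a-45c9-bb2c-175504c327f0"
	if err := checkPodUID(args, "e81b8de8-1714-4a5d-852a-e61d4bc9cd5d"); err != nil {
		fmt.Println("CNI ADD rejected:", err) // the status 400 seen in the log
	}
}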
Jan 30 08:50:52 crc kubenswrapper[4758]: E0130 08:50:52.580270 4758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Jan 30 08:50:52 crc kubenswrapper[4758]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_cd770c38-148a-45c9-bb2c-175504c327f0_0(5ed9d3849529876401c758a17bf267b6fd7fa6f5f7c55f2cfa40325d7da9482e): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5ed9d3849529876401c758a17bf267b6fd7fa6f5f7c55f2cfa40325d7da9482e" Netns:"/var/run/netns/a1907451-6347-43a8-b4e3-3c856c71d622" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=5ed9d3849529876401c758a17bf267b6fd7fa6f5f7c55f2cfa40325d7da9482e;K8S_POD_UID=cd770c38-148a-45c9-bb2c-175504c327f0" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/cd770c38-148a-45c9-bb2c-175504c327f0]: expected pod UID "cd770c38-148a-45c9-bb2c-175504c327f0" but got "e81b8de8-1714-4a5d-852a-e61d4bc9cd5d" from Kube API
Jan 30 08:50:52 crc kubenswrapper[4758]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Jan 30 08:50:52 crc kubenswrapper[4758]: > pod="openstack/openstackclient"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.638117 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-openstack-config\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.638222 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.638245 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-openstack-config-secret\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.638281 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwdgv\" (UniqueName: \"kubernetes.io/projected/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-kube-api-access-zwdgv\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.639276 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-openstack-config\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient"
Jan 30 08:50:52 crc kubenswrapper[4758]: I0130
08:50:52.644697 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.646376 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-openstack-config-secret\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.656846 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwdgv\" (UniqueName: \"kubernetes.io/projected/e81b8de8-1714-4a5d-852a-e61d4bc9cd5d-kube-api-access-zwdgv\") pod \"openstackclient\" (UID: \"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d\") " pod="openstack/openstackclient" Jan 30 08:50:52 crc kubenswrapper[4758]: I0130 08:50:52.806310 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.204705 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.204910 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fbd50144-fe99-468f-a32b-172996d95ca1","Type":"ContainerStarted","Data":"23ea4bed019ca1e75838bd0bd12af58918fb6e1073b422dbc8c8f6715618c2e3"} Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.208455 4758 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="cd770c38-148a-45c9-bb2c-175504c327f0" podUID="e81b8de8-1714-4a5d-852a-e61d4bc9cd5d" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.220451 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.368909 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config\") pod \"cd770c38-148a-45c9-bb2c-175504c327f0\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.369542 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "cd770c38-148a-45c9-bb2c-175504c327f0" (UID: "cd770c38-148a-45c9-bb2c-175504c327f0"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.369763 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config-secret\") pod \"cd770c38-148a-45c9-bb2c-175504c327f0\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.369888 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l42cg\" (UniqueName: \"kubernetes.io/projected/cd770c38-148a-45c9-bb2c-175504c327f0-kube-api-access-l42cg\") pod \"cd770c38-148a-45c9-bb2c-175504c327f0\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.369916 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-combined-ca-bundle\") pod \"cd770c38-148a-45c9-bb2c-175504c327f0\" (UID: \"cd770c38-148a-45c9-bb2c-175504c327f0\") " Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.370589 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.378052 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd770c38-148a-45c9-bb2c-175504c327f0-kube-api-access-l42cg" (OuterVolumeSpecName: "kube-api-access-l42cg") pod "cd770c38-148a-45c9-bb2c-175504c327f0" (UID: "cd770c38-148a-45c9-bb2c-175504c327f0"). InnerVolumeSpecName "kube-api-access-l42cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.383407 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd770c38-148a-45c9-bb2c-175504c327f0" (UID: "cd770c38-148a-45c9-bb2c-175504c327f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.387190 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "cd770c38-148a-45c9-bb2c-175504c327f0" (UID: "cd770c38-148a-45c9-bb2c-175504c327f0"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.433143 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 08:50:53 crc kubenswrapper[4758]: W0130 08:50:53.462149 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode81b8de8_1714_4a5d_852a_e61d4bc9cd5d.slice/crio-9a567d25952c83a99d81bd4c23f7410476874653074151d42b621e5aef755865 WatchSource:0}: Error finding container 9a567d25952c83a99d81bd4c23f7410476874653074151d42b621e5aef755865: Status 404 returned error can't find the container with id 9a567d25952c83a99d81bd4c23f7410476874653074151d42b621e5aef755865 Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.473937 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.473975 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l42cg\" (UniqueName: \"kubernetes.io/projected/cd770c38-148a-45c9-bb2c-175504c327f0-kube-api-access-l42cg\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.473985 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd770c38-148a-45c9-bb2c-175504c327f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.583196 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6f9c8c6ff5-f2sb7" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.673928 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-645778c498-xt8kb"] Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.674294 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-645778c498-xt8kb" podUID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerName="neutron-api" containerID="cri-o://d2f284d1f78a5a02cc7d33f74c6c84ab5179e1ed0d94292246f146a3dbb86c85" gracePeriod=30 Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.674498 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-645778c498-xt8kb" podUID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerName="neutron-httpd" containerID="cri-o://48f1de05547b3f44a9bb9e50923cbab0f98f39c1687f67b0d20d6bb8117c5d17" gracePeriod=30 Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.797845 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f80a6df-a108-4559-bc45-705f737ce4a1" path="/var/lib/kubelet/pods/6f80a6df-a108-4559-bc45-705f737ce4a1/volumes" Jan 30 08:50:53 crc kubenswrapper[4758]: I0130 08:50:53.798678 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd770c38-148a-45c9-bb2c-175504c327f0" path="/var/lib/kubelet/pods/cd770c38-148a-45c9-bb2c-175504c327f0/volumes" Jan 30 08:50:54 crc kubenswrapper[4758]: I0130 08:50:54.218307 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d","Type":"ContainerStarted","Data":"9a567d25952c83a99d81bd4c23f7410476874653074151d42b621e5aef755865"} Jan 30 08:50:54 crc kubenswrapper[4758]: I0130 08:50:54.220161 4758 generic.go:334] "Generic (PLEG): container 
finished" podID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerID="48f1de05547b3f44a9bb9e50923cbab0f98f39c1687f67b0d20d6bb8117c5d17" exitCode=0
Jan 30 08:50:54 crc kubenswrapper[4758]: I0130 08:50:54.220231 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-645778c498-xt8kb" event={"ID":"b9e036dd-d20e-4488-8354-7b1079bc8113","Type":"ContainerDied","Data":"48f1de05547b3f44a9bb9e50923cbab0f98f39c1687f67b0d20d6bb8117c5d17"}
Jan 30 08:50:54 crc kubenswrapper[4758]: I0130 08:50:54.225083 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 30 08:50:54 crc kubenswrapper[4758]: I0130 08:50:54.225079 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fbd50144-fe99-468f-a32b-172996d95ca1","Type":"ContainerStarted","Data":"993ef9cdf28dfec08b69082acb342cbf7d7f5eac2b2c57077d9ac7d6e5d613a7"}
Jan 30 08:50:54 crc kubenswrapper[4758]: I0130 08:50:54.225163 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fbd50144-fe99-468f-a32b-172996d95ca1","Type":"ContainerStarted","Data":"ed6bab1d35aad2e4c19c2327c2665d4d0a64333e80008977623a6e72f73a205b"}
Jan 30 08:50:54 crc kubenswrapper[4758]: I0130 08:50:54.267469 4758 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="cd770c38-148a-45c9-bb2c-175504c327f0" podUID="e81b8de8-1714-4a5d-852a-e61d4bc9cd5d"
Jan 30 08:50:54 crc kubenswrapper[4758]: I0130 08:50:54.268852 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.268823982 podStartE2EDuration="3.268823982s" podCreationTimestamp="2026-01-30 08:50:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:50:54.258579256 +0000 UTC m=+1259.230890817" watchObservedRunningTime="2026-01-30 08:50:54.268823982 +0000 UTC m=+1259.241135543"
Jan 30 08:50:56 crc kubenswrapper[4758]: I0130 08:50:56.593386 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
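The pod_startup_latency_tracker entry above records how long cinder-scheduler-0 took from API creation to being observed running, with the image-pull window subtracted for the SLO figure. Here firstStartedPulling and lastFinishedPulling are zero timestamps (the images were already present on the node), so the SLO and E2E durations coincide at about 3.27s. A sketch of that arithmetic under those assumptions, using the watch-observed running time from the entry; this is the shape of the computation, not the kubelet's exact bookkeeping:

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-30 08:50:51 +0000 UTC")
	running := mustParse("2026-01-30 08:50:54.268823982 +0000 UTC")

	// No image pull happened (zero-valued pull timestamps in the log),
	// so the pull window subtracted from the SLO duration is zero.
	var pull time.Duration

	e2e := running.Sub(created)
	slo := e2e - pull
	fmt.Printf("podStartE2EDuration=%s podStartSLOduration=%.9f\n", e2e, slo.Seconds())
	// prints 3.268823982s / 3.268823982, matching the entry above
}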
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.242712 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.263149 4758 generic.go:334] "Generic (PLEG): container finished" podID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerID="5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db" exitCode=137
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.263354 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerDied","Data":"5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db"}
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.263432 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68","Type":"ContainerDied","Data":"c0607f79147011ac217f4da35286aea41ac3abd388dd413c6012b3dbc3df6b9a"}
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.263536 4758 scope.go:117] "RemoveContainer" containerID="5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.263737 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.303608 4758 scope.go:117] "RemoveContainer" containerID="c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.357149 4758 scope.go:117] "RemoveContainer" containerID="35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.379429 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-log-httpd\") pod \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") "
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.379510 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-scripts\") pod \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") "
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.379541 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-config-data\") pod \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") "
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.379592 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsgzh\" (UniqueName: \"kubernetes.io/projected/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-kube-api-access-dsgzh\") pod \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") "
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.379659 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-run-httpd\") pod \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") "
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.379674 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-sg-core-conf-yaml\") pod \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.379713 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-combined-ca-bundle\") pod \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\" (UID: \"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68\") " Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.380327 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" (UID: "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.392549 4758 scope.go:117] "RemoveContainer" containerID="714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.393688 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-scripts" (OuterVolumeSpecName: "scripts") pod "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" (UID: "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.401259 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-kube-api-access-dsgzh" (OuterVolumeSpecName: "kube-api-access-dsgzh") pod "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" (UID: "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68"). InnerVolumeSpecName "kube-api-access-dsgzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.427910 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" (UID: "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.442080 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" (UID: "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.481680 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsgzh\" (UniqueName: \"kubernetes.io/projected/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-kube-api-access-dsgzh\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.481875 4758 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.481943 4758 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.482007 4758 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.482081 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.515888 4758 scope.go:117] "RemoveContainer" containerID="5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db" Jan 30 08:50:57 crc kubenswrapper[4758]: E0130 08:50:57.518245 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db\": container with ID starting with 5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db not found: ID does not exist" containerID="5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.518288 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db"} err="failed to get container status \"5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db\": rpc error: code = NotFound desc = could not find container \"5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db\": container with ID starting with 5e467b02099c2c9b33c477af0a54403130d0d70af7306b3410adcd18154df2db not found: ID does not exist" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.518312 4758 scope.go:117] "RemoveContainer" containerID="c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96" Jan 30 08:50:57 crc kubenswrapper[4758]: E0130 08:50:57.521133 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96\": container with ID starting with c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96 not found: ID does not exist" containerID="c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.521166 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96"} err="failed to get container status 
\"c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96\": rpc error: code = NotFound desc = could not find container \"c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96\": container with ID starting with c9e000d0201505057da487e30e0718d542b0cee64178268987cd689d4c228d96 not found: ID does not exist" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.521183 4758 scope.go:117] "RemoveContainer" containerID="35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8" Jan 30 08:50:57 crc kubenswrapper[4758]: E0130 08:50:57.521819 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8\": container with ID starting with 35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8 not found: ID does not exist" containerID="35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.521851 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8"} err="failed to get container status \"35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8\": rpc error: code = NotFound desc = could not find container \"35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8\": container with ID starting with 35f3017863b13b1a9900b9e03975dcc393215b04266ce33e962fae23117f3cc8 not found: ID does not exist" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.521866 4758 scope.go:117] "RemoveContainer" containerID="714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340" Jan 30 08:50:57 crc kubenswrapper[4758]: E0130 08:50:57.522844 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340\": container with ID starting with 714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340 not found: ID does not exist" containerID="714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.522873 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340"} err="failed to get container status \"714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340\": rpc error: code = NotFound desc = could not find container \"714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340\": container with ID starting with 714139ef3262dae231c95a2838b8771b662566e476bbc92778563e01bdd7c340 not found: ID does not exist" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.548332 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" (UID: "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.558097 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-config-data" (OuterVolumeSpecName: "config-data") pod "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" (UID: "78d72e86-4fa0-457b-a4c0-a9b1fc92fb68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.585278 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.585308 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.608871 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.637118 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.661695 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:50:57 crc kubenswrapper[4758]: E0130 08:50:57.666365 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="ceilometer-central-agent"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.666398 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="ceilometer-central-agent"
Jan 30 08:50:57 crc kubenswrapper[4758]: E0130 08:50:57.666417 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="sg-core"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.666425 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="sg-core"
Jan 30 08:50:57 crc kubenswrapper[4758]: E0130 08:50:57.666447 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="proxy-httpd"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.666454 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="proxy-httpd"
Jan 30 08:50:57 crc kubenswrapper[4758]: E0130 08:50:57.666466 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="ceilometer-notification-agent"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.666473 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="ceilometer-notification-agent"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.666689 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="ceilometer-notification-agent"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.666737 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="sg-core"
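The cpu_manager / memory_manager RemoveStaleState entries fire because ceilometer-0 was recreated under a new UID: resource-manager state keyed by the old pod UID no longer corresponds to any admitted pod, so the per-container CPUSet and memory assignments are dropped before the replacement pod is admitted. A sketch of pruning a UID-keyed assignment map against the set of active pods; the types here are illustrative, not the kubelet's state package:

package main

import "fmt"

// assignments maps podUID -> containerName -> an opaque CPUSet string,
// loosely modeled on the state the log entries above are cleaning up.
type assignments map[string]map[string]string

// removeStaleState drops every entry whose pod UID is no longer
// active, logging each removal the way cpu_manager.go does.
func removeStaleState(st assignments, active map[string]bool) {
	for podUID, containers := range st {
		if active[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(st, podUID)
	}
}

func main() {
	st := assignments{
		"78d72e86-4fa0-457b-a4c0-a9b1fc92fb68": { // the old ceilometer-0 UID
			"ceilometer-central-agent": "0-3",
			"sg-core":                  "0-3",
		},
	}
	// The recreated ceilometer-0 carries a fresh UID, so the old one is stale.
	removeStaleState(st, map[string]bool{"456181b2-373c-4e15-abaf-b35287b20b59": true})
}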
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.666756 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="proxy-httpd"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.666768 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" containerName="ceilometer-central-agent"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.668579 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.675067 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.675985 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.676259 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.764633 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-75f5775999-fhl5h"]
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.790563 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.790633 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-run-httpd\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.790809 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.790870 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-config-data\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.790901 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2c7l\" (UniqueName: \"kubernetes.io/projected/456181b2-373c-4e15-abaf-b35287b20b59-kube-api-access-x2c7l\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.790937 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-log-httpd\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0"
Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.790989 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-scripts\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.791361 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.799305 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.799577 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.799755 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.837707 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78d72e86-4fa0-457b-a4c0-a9b1fc92fb68" path="/var/lib/kubelet/pods/78d72e86-4fa0-457b-a4c0-a9b1fc92fb68/volumes" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.838620 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-75f5775999-fhl5h"] Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.892862 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-log-httpd\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.894147 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-public-tls-certs\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.894281 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-scripts\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.894383 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2358e5c-db98-4b7b-8b6c-2e83132655a9-run-httpd\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.894465 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2358e5c-db98-4b7b-8b6c-2e83132655a9-log-httpd\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.894759 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " 
pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.895065 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-run-httpd\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.895479 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-combined-ca-bundle\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.895891 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.898194 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-internal-tls-certs\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.898341 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-config-data\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.898446 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.898977 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-config-data\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.899124 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2c7l\" (UniqueName: \"kubernetes.io/projected/456181b2-373c-4e15-abaf-b35287b20b59-kube-api-access-x2c7l\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.899215 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d87gh\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-kube-api-access-d87gh\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 
08:50:57.893548 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-log-httpd\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.901033 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-scripts\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.896898 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-run-httpd\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.909531 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.916992 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-config-data\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.930778 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2c7l\" (UniqueName: \"kubernetes.io/projected/456181b2-373c-4e15-abaf-b35287b20b59-kube-api-access-x2c7l\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:57 crc kubenswrapper[4758]: I0130 08:50:57.940399 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " pod="openstack/ceilometer-0" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.000512 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-public-tls-certs\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.000569 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2358e5c-db98-4b7b-8b6c-2e83132655a9-run-httpd\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.000585 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2358e5c-db98-4b7b-8b6c-2e83132655a9-log-httpd\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc 
kubenswrapper[4758]: I0130 08:50:58.000630 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-combined-ca-bundle\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.000707 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-internal-tls-certs\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.000739 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-config-data\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.000754 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.000780 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d87gh\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-kube-api-access-d87gh\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.001484 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2358e5c-db98-4b7b-8b6c-2e83132655a9-run-httpd\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:58 crc kubenswrapper[4758]: E0130 08:50:58.001624 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:50:58 crc kubenswrapper[4758]: E0130 08:50:58.001653 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-75f5775999-fhl5h: configmap "swift-ring-files" not found Jan 30 08:50:58 crc kubenswrapper[4758]: E0130 08:50:58.001691 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift podName:c2358e5c-db98-4b7b-8b6c-2e83132655a9 nodeName:}" failed. No retries permitted until 2026-01-30 08:50:58.50167499 +0000 UTC m=+1263.473986541 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift") pod "swift-proxy-75f5775999-fhl5h" (UID: "c2358e5c-db98-4b7b-8b6c-2e83132655a9") : configmap "swift-ring-files" not found
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.001820 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2358e5c-db98-4b7b-8b6c-2e83132655a9-log-httpd\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h"
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.009924 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-public-tls-certs\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h"
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.015136 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-internal-tls-certs\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h"
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.016992 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-config-data\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h"
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.017253 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2358e5c-db98-4b7b-8b6c-2e83132655a9-combined-ca-bundle\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h"
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.022753 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
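The failure above is the central event in this excerpt: etc-swift is a projected volume that includes the swift-ring-files ConfigMap among its sources, and the mount cannot be prepared until that ConfigMap exists in the openstack namespace, so the swift-proxy pod cannot start while every other volume for it mounts cleanly. A minimal client-go sketch of the same lookup the projected-volume plugin keeps failing on (the kubeconfig wiring is assumed boilerplate; only the Get call mirrors the log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable kubeconfig at the default location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same lookup projected.go reports: configmap "swift-ring-files"
	// in namespace "openstack".
	cm, err := cs.CoreV1().ConfigMaps("openstack").Get(context.TODO(), "swift-ring-files", metav1.GetOptions{})
	if err != nil {
		fmt.Println("mount would fail:", err) // e.g. configmaps "swift-ring-files" not found
		return
	}
	fmt.Println("ring files present, keys:", len(cm.Data))
}

Once the ConfigMap appears, the next retry of the mount succeeds without the pod being recreated.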
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.028201 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d87gh\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-kube-api-access-d87gh\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h"
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.297023 4758 generic.go:334] "Generic (PLEG): container finished" podID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerID="d2f284d1f78a5a02cc7d33f74c6c84ab5179e1ed0d94292246f146a3dbb86c85" exitCode=0
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.297200 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-645778c498-xt8kb" event={"ID":"b9e036dd-d20e-4488-8354-7b1079bc8113","Type":"ContainerDied","Data":"d2f284d1f78a5a02cc7d33f74c6c84ab5179e1ed0d94292246f146a3dbb86c85"}
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.511471 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h"
Jan 30 08:50:58 crc kubenswrapper[4758]: E0130 08:50:58.512115 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 08:50:58 crc kubenswrapper[4758]: E0130 08:50:58.512133 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-75f5775999-fhl5h: configmap "swift-ring-files" not found
Jan 30 08:50:58 crc kubenswrapper[4758]: E0130 08:50:58.512178 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift podName:c2358e5c-db98-4b7b-8b6c-2e83132655a9 nodeName:}" failed. No retries permitted until 2026-01-30 08:50:59.512162933 +0000 UTC m=+1264.484474484 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift") pod "swift-proxy-75f5775999-fhl5h" (UID: "c2358e5c-db98-4b7b-8b6c-2e83132655a9") : configmap "swift-ring-files" not found
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.613203 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-645778c498-xt8kb"
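Note the durationBeforeRetry values across the repeated etc-swift failures: 500ms at 08:50:58.001, 1s here, then 2s, 4s, 8s and finally 16s further down. The kubelet doubles the delay after each failed attempt of the same volume operation. A standalone sketch of that doubling pattern; the 2-minute ceiling below is a placeholder, since the real cap is not visible in this excerpt:

package main

import (
	"fmt"
	"time"
)

// Mirrors the doubling visible in the log: 500ms, 1s, 2s, 4s, 8s, 16s, ...
func nextBackoff(d time.Duration) time.Duration {
	const (
		initial  = 500 * time.Millisecond
		maxDelay = 2 * time.Minute // placeholder ceiling
	)
	if d < initial {
		return initial
	}
	if d >= maxDelay {
		return maxDelay
	}
	return 2 * d
}

func main() {
	d := time.Duration(0)
	for i := 0; i < 7; i++ {
		d = nextBackoff(d)
		fmt.Println("durationBeforeRetry", d) // 500ms 1s 2s 4s 8s 16s 32s
	}
}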
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.660777 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.717474 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-combined-ca-bundle\") pod \"b9e036dd-d20e-4488-8354-7b1079bc8113\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") "
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.717636 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdj7z\" (UniqueName: \"kubernetes.io/projected/b9e036dd-d20e-4488-8354-7b1079bc8113-kube-api-access-qdj7z\") pod \"b9e036dd-d20e-4488-8354-7b1079bc8113\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") "
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.717672 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-ovndb-tls-certs\") pod \"b9e036dd-d20e-4488-8354-7b1079bc8113\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") "
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.717730 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-config\") pod \"b9e036dd-d20e-4488-8354-7b1079bc8113\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") "
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.717756 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-httpd-config\") pod \"b9e036dd-d20e-4488-8354-7b1079bc8113\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") "
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.728268 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b9e036dd-d20e-4488-8354-7b1079bc8113" (UID: "b9e036dd-d20e-4488-8354-7b1079bc8113"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.728330 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9e036dd-d20e-4488-8354-7b1079bc8113-kube-api-access-qdj7z" (OuterVolumeSpecName: "kube-api-access-qdj7z") pod "b9e036dd-d20e-4488-8354-7b1079bc8113" (UID: "b9e036dd-d20e-4488-8354-7b1079bc8113"). InnerVolumeSpecName "kube-api-access-qdj7z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.779232 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-config" (OuterVolumeSpecName: "config") pod "b9e036dd-d20e-4488-8354-7b1079bc8113" (UID: "b9e036dd-d20e-4488-8354-7b1079bc8113"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.790478 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9e036dd-d20e-4488-8354-7b1079bc8113" (UID: "b9e036dd-d20e-4488-8354-7b1079bc8113"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.821219 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b9e036dd-d20e-4488-8354-7b1079bc8113" (UID: "b9e036dd-d20e-4488-8354-7b1079bc8113"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.821773 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-ovndb-tls-certs\") pod \"b9e036dd-d20e-4488-8354-7b1079bc8113\" (UID: \"b9e036dd-d20e-4488-8354-7b1079bc8113\") " Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.822994 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.823108 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdj7z\" (UniqueName: \"kubernetes.io/projected/b9e036dd-d20e-4488-8354-7b1079bc8113-kube-api-access-qdj7z\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.823190 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.823270 4758 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:58 crc kubenswrapper[4758]: W0130 08:50:58.822005 4758 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b9e036dd-d20e-4488-8354-7b1079bc8113/volumes/kubernetes.io~secret/ovndb-tls-certs Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.824611 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b9e036dd-d20e-4488-8354-7b1079bc8113" (UID: "b9e036dd-d20e-4488-8354-7b1079bc8113"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:50:58 crc kubenswrapper[4758]: I0130 08:50:58.928153 4758 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9e036dd-d20e-4488-8354-7b1079bc8113-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:50:59 crc kubenswrapper[4758]: I0130 08:50:59.308478 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerStarted","Data":"d712f3f39456f410b004f641e198f9ee1a339ca9d1e26fe4b3178bcb4028fc36"} Jan 30 08:50:59 crc kubenswrapper[4758]: I0130 08:50:59.310749 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-645778c498-xt8kb" event={"ID":"b9e036dd-d20e-4488-8354-7b1079bc8113","Type":"ContainerDied","Data":"e0c5305909e72f305e168967cbd28a761d5403564b6070027fcc64369e555331"} Jan 30 08:50:59 crc kubenswrapper[4758]: I0130 08:50:59.310825 4758 scope.go:117] "RemoveContainer" containerID="48f1de05547b3f44a9bb9e50923cbab0f98f39c1687f67b0d20d6bb8117c5d17" Jan 30 08:50:59 crc kubenswrapper[4758]: I0130 08:50:59.311063 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-645778c498-xt8kb" Jan 30 08:50:59 crc kubenswrapper[4758]: I0130 08:50:59.383280 4758 scope.go:117] "RemoveContainer" containerID="d2f284d1f78a5a02cc7d33f74c6c84ab5179e1ed0d94292246f146a3dbb86c85" Jan 30 08:50:59 crc kubenswrapper[4758]: I0130 08:50:59.410565 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-645778c498-xt8kb"] Jan 30 08:50:59 crc kubenswrapper[4758]: I0130 08:50:59.426338 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-645778c498-xt8kb"] Jan 30 08:50:59 crc kubenswrapper[4758]: I0130 08:50:59.543008 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:50:59 crc kubenswrapper[4758]: E0130 08:50:59.543161 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:50:59 crc kubenswrapper[4758]: E0130 08:50:59.543298 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-75f5775999-fhl5h: configmap "swift-ring-files" not found Jan 30 08:50:59 crc kubenswrapper[4758]: E0130 08:50:59.543381 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift podName:c2358e5c-db98-4b7b-8b6c-2e83132655a9 nodeName:}" failed. No retries permitted until 2026-01-30 08:51:01.543361734 +0000 UTC m=+1266.515673285 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift") pod "swift-proxy-75f5775999-fhl5h" (UID: "c2358e5c-db98-4b7b-8b6c-2e83132655a9") : configmap "swift-ring-files" not found Jan 30 08:50:59 crc kubenswrapper[4758]: I0130 08:50:59.781259 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9e036dd-d20e-4488-8354-7b1079bc8113" path="/var/lib/kubelet/pods/b9e036dd-d20e-4488-8354-7b1079bc8113/volumes" Jan 30 08:51:00 crc kubenswrapper[4758]: I0130 08:51:00.328447 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerStarted","Data":"056805156e46cbccce9c026e2d720a807e1f2530c8afc8d6afbacc4cb099539b"} Jan 30 08:51:00 crc kubenswrapper[4758]: I0130 08:51:00.329026 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerStarted","Data":"1929e27981ebbdc0f15097ba4c6e8187f43c6e486ed49a85e5cf2d09718e26d7"} Jan 30 08:51:01 crc kubenswrapper[4758]: I0130 08:51:01.051659 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:51:01 crc kubenswrapper[4758]: I0130 08:51:01.601269 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:51:01 crc kubenswrapper[4758]: E0130 08:51:01.601672 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:51:01 crc kubenswrapper[4758]: E0130 08:51:01.601697 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-75f5775999-fhl5h: configmap "swift-ring-files" not found Jan 30 08:51:01 crc kubenswrapper[4758]: E0130 08:51:01.601778 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift podName:c2358e5c-db98-4b7b-8b6c-2e83132655a9 nodeName:}" failed. No retries permitted until 2026-01-30 08:51:05.601747678 +0000 UTC m=+1270.574059229 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift") pod "swift-proxy-75f5775999-fhl5h" (UID: "c2358e5c-db98-4b7b-8b6c-2e83132655a9") : configmap "swift-ring-files" not found Jan 30 08:51:01 crc kubenswrapper[4758]: I0130 08:51:01.875824 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 08:51:05 crc kubenswrapper[4758]: I0130 08:51:05.392816 4758 generic.go:334] "Generic (PLEG): container finished" podID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerID="b61c04acdafa8b1f3c73d0f0ca3fbc902eb1178c23428a7de5775bf0a6dbd643" exitCode=137 Jan 30 08:51:05 crc kubenswrapper[4758]: I0130 08:51:05.392895 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"46b02117-6d35-4cca-8eac-ee772c3916d0","Type":"ContainerDied","Data":"b61c04acdafa8b1f3c73d0f0ca3fbc902eb1178c23428a7de5775bf0a6dbd643"} Jan 30 08:51:05 crc kubenswrapper[4758]: I0130 08:51:05.693758 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:51:05 crc kubenswrapper[4758]: E0130 08:51:05.694034 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:51:05 crc kubenswrapper[4758]: E0130 08:51:05.694070 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-75f5775999-fhl5h: configmap "swift-ring-files" not found Jan 30 08:51:05 crc kubenswrapper[4758]: E0130 08:51:05.694126 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift podName:c2358e5c-db98-4b7b-8b6c-2e83132655a9 nodeName:}" failed. No retries permitted until 2026-01-30 08:51:13.694105411 +0000 UTC m=+1278.666416962 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift") pod "swift-proxy-75f5775999-fhl5h" (UID: "c2358e5c-db98-4b7b-8b6c-2e83132655a9") : configmap "swift-ring-files" not found
Jan 30 08:51:06 crc kubenswrapper[4758]: I0130 08:51:06.407584 4758 generic.go:334] "Generic (PLEG): container finished" podID="365b123c-aa7f-464d-b659-78154f86d42f" containerID="5ac24c2e0d10a94520a56718b5cb7e279d3222a4776ef2910ce318c3d652e18a" exitCode=137
Jan 30 08:51:06 crc kubenswrapper[4758]: I0130 08:51:06.407748 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerDied","Data":"5ac24c2e0d10a94520a56718b5cb7e279d3222a4776ef2910ce318c3d652e18a"}
Jan 30 08:51:06 crc kubenswrapper[4758]: I0130 08:51:06.412692 4758 generic.go:334] "Generic (PLEG): container finished" podID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerID="33c5db16903a0ea24fc43801695a658eca16a53583c2497012de7812e4d3cb27" exitCode=137
Jan 30 08:51:06 crc kubenswrapper[4758]: I0130 08:51:06.412734 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf698bb7b-gp87v" event={"ID":"97906db2-3b2d-44ec-af77-d3edf75b7f76","Type":"ContainerDied","Data":"33c5db16903a0ea24fc43801695a658eca16a53583c2497012de7812e4d3cb27"}
Jan 30 08:51:07 crc kubenswrapper[4758]: I0130 08:51:07.434643 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.165:8776/healthcheck\": dial tcp 10.217.0.165:8776: connect: connection refused"
Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.439596 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
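exitCode=137 on the two horizon containers above is 128+9, i.e. the process was ended by SIGKILL (whether from a failing probe, OOM, or an expired grace period is not visible in this excerpt); compare exitCode=143 further down (128+15, SIGTERM) and exitCode=0 (clean exit). The readiness probe failure that follows is a plain HTTP GET against the container's endpoint. A self-contained sketch of such a check, reusing the URL from the log:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// A readiness-style HTTP check; a refused connection here maps to the
// probeResult="failure" entry in the log.
func ready(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 10.217.0.165:8776: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy: HTTP %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := ready("http://10.217.0.165:8776/healthcheck"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}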
Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.468096 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf698bb7b-gp87v" event={"ID":"97906db2-3b2d-44ec-af77-d3edf75b7f76","Type":"ContainerStarted","Data":"a1146f1ee627c842c00198864692549d9cec1a00d0f932a99928483d26346150"}
Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.492296 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nrlg\" (UniqueName: \"kubernetes.io/projected/46b02117-6d35-4cca-8eac-ee772c3916d0-kube-api-access-9nrlg\") pod \"46b02117-6d35-4cca-8eac-ee772c3916d0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") "
Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.492537 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data-custom\") pod \"46b02117-6d35-4cca-8eac-ee772c3916d0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") "
Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.492644 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46b02117-6d35-4cca-8eac-ee772c3916d0-logs\") pod \"46b02117-6d35-4cca-8eac-ee772c3916d0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") "
Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.492774 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data\") pod \"46b02117-6d35-4cca-8eac-ee772c3916d0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") "
Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.492850 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-scripts\") pod \"46b02117-6d35-4cca-8eac-ee772c3916d0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") "
Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.493436 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46b02117-6d35-4cca-8eac-ee772c3916d0-logs" (OuterVolumeSpecName: "logs") pod "46b02117-6d35-4cca-8eac-ee772c3916d0" (UID: "46b02117-6d35-4cca-8eac-ee772c3916d0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.493226 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-combined-ca-bundle\") pod \"46b02117-6d35-4cca-8eac-ee772c3916d0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.495451 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46b02117-6d35-4cca-8eac-ee772c3916d0-etc-machine-id\") pod \"46b02117-6d35-4cca-8eac-ee772c3916d0\" (UID: \"46b02117-6d35-4cca-8eac-ee772c3916d0\") " Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.495940 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46b02117-6d35-4cca-8eac-ee772c3916d0-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.496077 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46b02117-6d35-4cca-8eac-ee772c3916d0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "46b02117-6d35-4cca-8eac-ee772c3916d0" (UID: "46b02117-6d35-4cca-8eac-ee772c3916d0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.498910 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46b02117-6d35-4cca-8eac-ee772c3916d0-kube-api-access-9nrlg" (OuterVolumeSpecName: "kube-api-access-9nrlg") pod "46b02117-6d35-4cca-8eac-ee772c3916d0" (UID: "46b02117-6d35-4cca-8eac-ee772c3916d0"). InnerVolumeSpecName "kube-api-access-9nrlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.511709 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerStarted","Data":"e4c6a196b48061d2cc6a1f8d240ab8cead89ed7d7e814ad3fae5ef4e05e106b8"} Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.514054 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "46b02117-6d35-4cca-8eac-ee772c3916d0" (UID: "46b02117-6d35-4cca-8eac-ee772c3916d0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.516571 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-scripts" (OuterVolumeSpecName: "scripts") pod "46b02117-6d35-4cca-8eac-ee772c3916d0" (UID: "46b02117-6d35-4cca-8eac-ee772c3916d0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.523123 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"46b02117-6d35-4cca-8eac-ee772c3916d0","Type":"ContainerDied","Data":"34dd11d55e470d260388ede08e80598aeb8947a7588a438590094ab2a374b774"} Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.523256 4758 scope.go:117] "RemoveContainer" containerID="b61c04acdafa8b1f3c73d0f0ca3fbc902eb1178c23428a7de5775bf0a6dbd643" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.523430 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.558379 4758 scope.go:117] "RemoveContainer" containerID="cdc39ebd14d5fe4c1108b613a24ba8e7099db0c7ab701f775ab1aaaf3a839362" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.598186 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nrlg\" (UniqueName: \"kubernetes.io/projected/46b02117-6d35-4cca-8eac-ee772c3916d0-kube-api-access-9nrlg\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.598218 4758 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.598229 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.598238 4758 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46b02117-6d35-4cca-8eac-ee772c3916d0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.598612 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46b02117-6d35-4cca-8eac-ee772c3916d0" (UID: "46b02117-6d35-4cca-8eac-ee772c3916d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.660251 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data" (OuterVolumeSpecName: "config-data") pod "46b02117-6d35-4cca-8eac-ee772c3916d0" (UID: "46b02117-6d35-4cca-8eac-ee772c3916d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.699772 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.699807 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b02117-6d35-4cca-8eac-ee772c3916d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.867110 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.884852 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.923940 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:51:09 crc kubenswrapper[4758]: E0130 08:51:09.924388 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerName="neutron-httpd" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.924407 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerName="neutron-httpd" Jan 30 08:51:09 crc kubenswrapper[4758]: E0130 08:51:09.924422 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api-log" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.924428 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api-log" Jan 30 08:51:09 crc kubenswrapper[4758]: E0130 08:51:09.924440 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.924446 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api" Jan 30 08:51:09 crc kubenswrapper[4758]: E0130 08:51:09.924494 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerName="neutron-api" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.924501 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerName="neutron-api" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.924677 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.924693 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" containerName="cinder-api-log" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.924703 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerName="neutron-httpd" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.924724 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9e036dd-d20e-4488-8354-7b1079bc8113" containerName="neutron-api" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.926750 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.928660 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.930431 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.930817 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 08:51:09 crc kubenswrapper[4758]: I0130 08:51:09.935634 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.006393 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6d3f5e9-e330-476b-be63-775114f987e6-logs\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.006504 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vgf8\" (UniqueName: \"kubernetes.io/projected/d6d3f5e9-e330-476b-be63-775114f987e6-kube-api-access-2vgf8\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.006615 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-scripts\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.006802 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-config-data\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.007302 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.007504 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.007631 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.007741 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/d6d3f5e9-e330-476b-be63-775114f987e6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.007916 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-config-data-custom\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.110218 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.110274 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.110323 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6d3f5e9-e330-476b-be63-775114f987e6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.110368 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-config-data-custom\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.110418 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6d3f5e9-e330-476b-be63-775114f987e6-logs\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.110488 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vgf8\" (UniqueName: \"kubernetes.io/projected/d6d3f5e9-e330-476b-be63-775114f987e6-kube-api-access-2vgf8\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.110505 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-scripts\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.110542 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-config-data\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.110559 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.112055 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6d3f5e9-e330-476b-be63-775114f987e6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.114639 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6d3f5e9-e330-476b-be63-775114f987e6-logs\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.131494 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-config-data\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.131754 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.131795 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-config-data-custom\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.132439 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.132553 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vgf8\" (UniqueName: \"kubernetes.io/projected/d6d3f5e9-e330-476b-be63-775114f987e6-kube-api-access-2vgf8\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.133371 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-scripts\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.141239 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6d3f5e9-e330-476b-be63-775114f987e6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d6d3f5e9-e330-476b-be63-775114f987e6\") " pod="openstack/cinder-api-0" Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.322511 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0"
Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.543672 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerStarted","Data":"e8e2a20c9ea4e39eefa6de1113690b21340a717486562346be315bf42b27f2b0"}
Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.568226 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e81b8de8-1714-4a5d-852a-e61d4bc9cd5d","Type":"ContainerStarted","Data":"b5dde43d054d2e7e158ca882b27107da67d5b7979c13958ff3d9e8a56758f87e"}
Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.602446 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.892480091 podStartE2EDuration="18.602423194s" podCreationTimestamp="2026-01-30 08:50:52 +0000 UTC" firstStartedPulling="2026-01-30 08:50:53.479122636 +0000 UTC m=+1258.451434187" lastFinishedPulling="2026-01-30 08:51:09.189065749 +0000 UTC m=+1274.161377290" observedRunningTime="2026-01-30 08:51:10.590114344 +0000 UTC m=+1275.562425895" watchObservedRunningTime="2026-01-30 08:51:10.602423194 +0000 UTC m=+1275.574734745"
Jan 30 08:51:10 crc kubenswrapper[4758]: W0130 08:51:10.880307 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6d3f5e9_e330_476b_be63_775114f987e6.slice/crio-33cbcedd4268ec62bd39894d0558ea3e2a562f1ce131c5a210bc60f4a55ec9e7 WatchSource:0}: Error finding container 33cbcedd4268ec62bd39894d0558ea3e2a562f1ce131c5a210bc60f4a55ec9e7: Status 404 returned error can't find the container with id 33cbcedd4268ec62bd39894d0558ea3e2a562f1ce131c5a210bc60f4a55ec9e7
Jan 30 08:51:10 crc kubenswrapper[4758]: I0130 08:51:10.883992 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 30 08:51:11 crc kubenswrapper[4758]: I0130 08:51:11.582195 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d6d3f5e9-e330-476b-be63-775114f987e6","Type":"ContainerStarted","Data":"33cbcedd4268ec62bd39894d0558ea3e2a562f1ce131c5a210bc60f4a55ec9e7"}
Jan 30 08:51:11 crc kubenswrapper[4758]: I0130 08:51:11.600778 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerStarted","Data":"67149206f58ad0507606cce2e343d6cc655be53f23f559d70aa0ec8a091a7ab5"}
Jan 30 08:51:11 crc kubenswrapper[4758]: I0130 08:51:11.601705 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="ceilometer-central-agent" containerID="cri-o://1929e27981ebbdc0f15097ba4c6e8187f43c6e486ed49a85e5cf2d09718e26d7" gracePeriod=30
Jan 30 08:51:11 crc kubenswrapper[4758]: I0130 08:51:11.601847 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 08:51:11 crc kubenswrapper[4758]: I0130 08:51:11.602323 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="ceilometer-notification-agent" containerID="cri-o://056805156e46cbccce9c026e2d720a807e1f2530c8afc8d6afbacc4cb099539b" gracePeriod=30
Jan 30 08:51:11 crc kubenswrapper[4758]: I0130 08:51:11.602391 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="sg-core" containerID="cri-o://e4c6a196b48061d2cc6a1f8d240ab8cead89ed7d7e814ad3fae5ef4e05e106b8" gracePeriod=30
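Each "Killing container with a grace period" entry carries gracePeriod=30: the runtime first delivers SIGTERM and escalates to SIGKILL only if the container is still running when the grace period expires. The ceilometer containers below exit within a second (exitCode=0 and exitCode=2), so no escalation is needed here. A generic sketch of the pattern using os/exec (illustrative, not the CRI call path):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// Stop a process the way the log describes: SIGTERM, then SIGKILL
// after the grace period if it has not exited (Unix-only sketch).
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM)
	select {
	case <-done:
		fmt.Println("exited within grace period")
	case <-time.After(grace):
		_ = cmd.Process.Kill() // SIGKILL -> exit code 137 (128+9)
		<-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	_ = cmd.Start()
	stopWithGrace(cmd, 30*time.Second) // gracePeriod=30 as in the log
}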
Jan 30 08:51:11 crc kubenswrapper[4758]: I0130 08:51:11.602612 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="proxy-httpd" containerID="cri-o://67149206f58ad0507606cce2e343d6cc655be53f23f559d70aa0ec8a091a7ab5" gracePeriod=30
Jan 30 08:51:11 crc kubenswrapper[4758]: I0130 08:51:11.653489 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.245507568 podStartE2EDuration="14.653461946s" podCreationTimestamp="2026-01-30 08:50:57 +0000 UTC" firstStartedPulling="2026-01-30 08:50:58.665170535 +0000 UTC m=+1263.637482086" lastFinishedPulling="2026-01-30 08:51:11.073124913 +0000 UTC m=+1276.045436464" observedRunningTime="2026-01-30 08:51:11.635558527 +0000 UTC m=+1276.607870078" watchObservedRunningTime="2026-01-30 08:51:11.653461946 +0000 UTC m=+1276.625773497"
Jan 30 08:51:11 crc kubenswrapper[4758]: I0130 08:51:11.784286 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46b02117-6d35-4cca-8eac-ee772c3916d0" path="/var/lib/kubelet/pods/46b02117-6d35-4cca-8eac-ee772c3916d0/volumes"
Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.613177 4758 generic.go:334] "Generic (PLEG): container finished" podID="456181b2-373c-4e15-abaf-b35287b20b59" containerID="e4c6a196b48061d2cc6a1f8d240ab8cead89ed7d7e814ad3fae5ef4e05e106b8" exitCode=2
Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.613514 4758 generic.go:334] "Generic (PLEG): container finished" podID="456181b2-373c-4e15-abaf-b35287b20b59" containerID="056805156e46cbccce9c026e2d720a807e1f2530c8afc8d6afbacc4cb099539b" exitCode=0
Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.613524 4758 generic.go:334] "Generic (PLEG): container finished" podID="456181b2-373c-4e15-abaf-b35287b20b59" containerID="1929e27981ebbdc0f15097ba4c6e8187f43c6e486ed49a85e5cf2d09718e26d7" exitCode=0
Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.613239 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerDied","Data":"e4c6a196b48061d2cc6a1f8d240ab8cead89ed7d7e814ad3fae5ef4e05e106b8"}
Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.613636 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerDied","Data":"056805156e46cbccce9c026e2d720a807e1f2530c8afc8d6afbacc4cb099539b"}
Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.613675 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerDied","Data":"1929e27981ebbdc0f15097ba4c6e8187f43c6e486ed49a85e5cf2d09718e26d7"}
Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.617364 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d6d3f5e9-e330-476b-be63-775114f987e6","Type":"ContainerStarted","Data":"76f5450755f091c8a03cab7443d0eb6e42d855f698a24bfdf17cfe5abfbe0036"}
Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.617406 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d6d3f5e9-e330-476b-be63-775114f987e6","Type":"ContainerStarted","Data":"59a4637a4368a6b9b62000c97418aafd3de137b3c50e1699a1b0131a7de36865"}
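The pod_startup_latency_tracker entries report two durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). For the ceilometer-0 entry above, 14.653461946s minus 12.407954378s of pulling leaves exactly the reported 2.245507568s. The arithmetic, with the timestamps copied from the log:

package main

import (
	"fmt"
	"time"
)

// Reproduces the ceilometer-0 numbers: SLO duration = E2E duration
// minus the image-pull window.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-30 08:50:57 +0000 UTC")
	running := parse("2026-01-30 08:51:11.653461946 +0000 UTC")
	pullStart := parse("2026-01-30 08:50:58.665170535 +0000 UTC")
	pullEnd := parse("2026-01-30 08:51:11.073124913 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println("podStartE2EDuration:", e2e) // 14.653461946s
	fmt.Println("podStartSLOduration:", slo) // 2.245507568s
}

In the cinder-api-0 entry just below, both pull timestamps are the zero time (no image pull was needed), so the two durations coincide at 3.659353953s.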
event={"ID":"d6d3f5e9-e330-476b-be63-775114f987e6","Type":"ContainerStarted","Data":"59a4637a4368a6b9b62000c97418aafd3de137b3c50e1699a1b0131a7de36865"} Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.618483 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.659372 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.659353953 podStartE2EDuration="3.659353953s" podCreationTimestamp="2026-01-30 08:51:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:51:12.652594908 +0000 UTC m=+1277.624906459" watchObservedRunningTime="2026-01-30 08:51:12.659353953 +0000 UTC m=+1277.631665504" Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.864871 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-75649bd464-bvxps" Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.866373 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-75649bd464-bvxps" Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.984340 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6bdfdc4b-wwqnt"] Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.984631 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6bdfdc4b-wwqnt" podUID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerName="placement-log" containerID="cri-o://ca13ba52ecd58780c742a36133f637fb5b60b222e69dea6e09fc29cd35f5fd19" gracePeriod=30 Jan 30 08:51:12 crc kubenswrapper[4758]: I0130 08:51:12.985104 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6bdfdc4b-wwqnt" podUID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerName="placement-api" containerID="cri-o://21679f960e51878ddb064b4d7ad7fbc76c5dcdf3143e55291739f0ba963b83c7" gracePeriod=30 Jan 30 08:51:13 crc kubenswrapper[4758]: I0130 08:51:13.632374 4758 generic.go:334] "Generic (PLEG): container finished" podID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerID="ca13ba52ecd58780c742a36133f637fb5b60b222e69dea6e09fc29cd35f5fd19" exitCode=143 Jan 30 08:51:13 crc kubenswrapper[4758]: I0130 08:51:13.632446 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bdfdc4b-wwqnt" event={"ID":"82fd1f36-9f4f-441f-959d-e2eddc79c99b","Type":"ContainerDied","Data":"ca13ba52ecd58780c742a36133f637fb5b60b222e69dea6e09fc29cd35f5fd19"} Jan 30 08:51:13 crc kubenswrapper[4758]: I0130 08:51:13.733654 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:51:13 crc kubenswrapper[4758]: E0130 08:51:13.735464 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:51:13 crc kubenswrapper[4758]: E0130 08:51:13.735487 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-75f5775999-fhl5h: configmap "swift-ring-files" not found Jan 30 08:51:13 crc kubenswrapper[4758]: E0130 08:51:13.735544 4758 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift podName:c2358e5c-db98-4b7b-8b6c-2e83132655a9 nodeName:}" failed. No retries permitted until 2026-01-30 08:51:29.735512192 +0000 UTC m=+1294.707823743 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift") pod "swift-proxy-75f5775999-fhl5h" (UID: "c2358e5c-db98-4b7b-8b6c-2e83132655a9") : configmap "swift-ring-files" not found Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.009477 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.010017 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.043305 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.043358 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.189572 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-psn47"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.190777 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.219725 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-psn47"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.285812 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-operator-scripts\") pod \"nova-api-db-create-psn47\" (UID: \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\") " pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.285944 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lflrx\" (UniqueName: \"kubernetes.io/projected/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-kube-api-access-lflrx\") pod \"nova-api-db-create-psn47\" (UID: \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\") " pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.292084 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-8mvw7"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.295599 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.309348 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8mvw7"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.317506 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-7afc-account-create-update-4gfp5"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.325239 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.337261 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.386703 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7afc-account-create-update-4gfp5"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.399287 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-operator-scripts\") pod \"nova-cell0-db-create-8mvw7\" (UID: \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\") " pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.399398 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lflrx\" (UniqueName: \"kubernetes.io/projected/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-kube-api-access-lflrx\") pod \"nova-api-db-create-psn47\" (UID: \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\") " pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.399431 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbr77\" (UniqueName: \"kubernetes.io/projected/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-kube-api-access-kbr77\") pod \"nova-cell0-db-create-8mvw7\" (UID: \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\") " pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.399597 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2zm6\" (UniqueName: \"kubernetes.io/projected/227bbca8-b963-4b39-af28-ac9dbf50bc73-kube-api-access-q2zm6\") pod \"nova-api-7afc-account-create-update-4gfp5\" (UID: \"227bbca8-b963-4b39-af28-ac9dbf50bc73\") " pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.399681 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-operator-scripts\") pod \"nova-api-db-create-psn47\" (UID: \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\") " pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.399791 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/227bbca8-b963-4b39-af28-ac9dbf50bc73-operator-scripts\") pod \"nova-api-7afc-account-create-update-4gfp5\" (UID: \"227bbca8-b963-4b39-af28-ac9dbf50bc73\") " pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.400964 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-operator-scripts\") pod \"nova-api-db-create-psn47\" (UID: \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\") " pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.448398 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-7kv6r"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.448423 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lflrx\" (UniqueName: \"kubernetes.io/projected/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-kube-api-access-lflrx\") pod \"nova-api-db-create-psn47\" (UID: \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\") " pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.449650 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.473324 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7kv6r"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.501629 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/227bbca8-b963-4b39-af28-ac9dbf50bc73-operator-scripts\") pod \"nova-api-7afc-account-create-update-4gfp5\" (UID: \"227bbca8-b963-4b39-af28-ac9dbf50bc73\") " pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.501952 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-operator-scripts\") pod \"nova-cell0-db-create-8mvw7\" (UID: \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\") " pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.501992 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbr77\" (UniqueName: \"kubernetes.io/projected/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-kube-api-access-kbr77\") pod \"nova-cell0-db-create-8mvw7\" (UID: \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\") " pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.502066 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2zm6\" (UniqueName: \"kubernetes.io/projected/227bbca8-b963-4b39-af28-ac9dbf50bc73-kube-api-access-q2zm6\") pod \"nova-api-7afc-account-create-update-4gfp5\" (UID: \"227bbca8-b963-4b39-af28-ac9dbf50bc73\") " pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.503135 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/227bbca8-b963-4b39-af28-ac9dbf50bc73-operator-scripts\") pod \"nova-api-7afc-account-create-update-4gfp5\" (UID: \"227bbca8-b963-4b39-af28-ac9dbf50bc73\") " pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.503693 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-operator-scripts\") pod \"nova-cell0-db-create-8mvw7\" (UID: \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\") " pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.523582 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.535920 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbr77\" (UniqueName: \"kubernetes.io/projected/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-kube-api-access-kbr77\") pod \"nova-cell0-db-create-8mvw7\" (UID: \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\") " pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.555489 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2zm6\" (UniqueName: \"kubernetes.io/projected/227bbca8-b963-4b39-af28-ac9dbf50bc73-kube-api-access-q2zm6\") pod \"nova-api-7afc-account-create-update-4gfp5\" (UID: \"227bbca8-b963-4b39-af28-ac9dbf50bc73\") " pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.608981 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-operator-scripts\") pod \"nova-cell1-db-create-7kv6r\" (UID: \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\") " pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.609073 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q78gs\" (UniqueName: \"kubernetes.io/projected/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-kube-api-access-q78gs\") pod \"nova-cell1-db-create-7kv6r\" (UID: \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\") " pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.619303 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-4e24-account-create-update-pkqqp"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.621684 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.628899 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4e24-account-create-update-pkqqp"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.631566 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.711907 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-operator-scripts\") pod \"nova-cell1-db-create-7kv6r\" (UID: \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\") " pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.711960 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q78gs\" (UniqueName: \"kubernetes.io/projected/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-kube-api-access-q78gs\") pod \"nova-cell1-db-create-7kv6r\" (UID: \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\") " pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.713375 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl7xj\" (UniqueName: \"kubernetes.io/projected/6a7e2817-2850-4d53-8ebf-4977eea68664-kube-api-access-wl7xj\") pod \"nova-cell0-4e24-account-create-update-pkqqp\" (UID: \"6a7e2817-2850-4d53-8ebf-4977eea68664\") " pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.713420 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7e2817-2850-4d53-8ebf-4977eea68664-operator-scripts\") pod \"nova-cell0-4e24-account-create-update-pkqqp\" (UID: \"6a7e2817-2850-4d53-8ebf-4977eea68664\") " pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.720887 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-operator-scripts\") pod \"nova-cell1-db-create-7kv6r\" (UID: \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\") " pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.721140 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.741673 4758 generic.go:334] "Generic (PLEG): container finished" podID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerID="21679f960e51878ddb064b4d7ad7fbc76c5dcdf3143e55291739f0ba963b83c7" exitCode=0 Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.741767 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bdfdc4b-wwqnt" event={"ID":"82fd1f36-9f4f-441f-959d-e2eddc79c99b","Type":"ContainerDied","Data":"21679f960e51878ddb064b4d7ad7fbc76c5dcdf3143e55291739f0ba963b83c7"} Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.742767 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q78gs\" (UniqueName: \"kubernetes.io/projected/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-kube-api-access-q78gs\") pod \"nova-cell1-db-create-7kv6r\" (UID: \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\") " pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.746559 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.816267 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl7xj\" (UniqueName: \"kubernetes.io/projected/6a7e2817-2850-4d53-8ebf-4977eea68664-kube-api-access-wl7xj\") pod \"nova-cell0-4e24-account-create-update-pkqqp\" (UID: \"6a7e2817-2850-4d53-8ebf-4977eea68664\") " pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.816332 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7e2817-2850-4d53-8ebf-4977eea68664-operator-scripts\") pod \"nova-cell0-4e24-account-create-update-pkqqp\" (UID: \"6a7e2817-2850-4d53-8ebf-4977eea68664\") " pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.817347 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7e2817-2850-4d53-8ebf-4977eea68664-operator-scripts\") pod \"nova-cell0-4e24-account-create-update-pkqqp\" (UID: \"6a7e2817-2850-4d53-8ebf-4977eea68664\") " pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.828822 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-649b-account-create-update-fjrbb"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.830547 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.840684 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-649b-account-create-update-fjrbb"] Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.844461 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl7xj\" (UniqueName: \"kubernetes.io/projected/6a7e2817-2850-4d53-8ebf-4977eea68664-kube-api-access-wl7xj\") pod \"nova-cell0-4e24-account-create-update-pkqqp\" (UID: \"6a7e2817-2850-4d53-8ebf-4977eea68664\") " pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.846541 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.918499 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqg5b\" (UniqueName: \"kubernetes.io/projected/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-kube-api-access-mqg5b\") pod \"nova-cell1-649b-account-create-update-fjrbb\" (UID: \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\") " pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.918564 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-operator-scripts\") pod \"nova-cell1-649b-account-create-update-fjrbb\" (UID: \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\") " pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.921942 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:16 crc kubenswrapper[4758]: I0130 08:51:16.970073 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.021536 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqg5b\" (UniqueName: \"kubernetes.io/projected/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-kube-api-access-mqg5b\") pod \"nova-cell1-649b-account-create-update-fjrbb\" (UID: \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\") " pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.021588 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-operator-scripts\") pod \"nova-cell1-649b-account-create-update-fjrbb\" (UID: \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\") " pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.022375 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-operator-scripts\") pod \"nova-cell1-649b-account-create-update-fjrbb\" (UID: \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\") " pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.036319 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.052125 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqg5b\" (UniqueName: \"kubernetes.io/projected/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-kube-api-access-mqg5b\") pod \"nova-cell1-649b-account-create-update-fjrbb\" (UID: \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\") " pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.122734 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-internal-tls-certs\") pod \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.123310 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82fd1f36-9f4f-441f-959d-e2eddc79c99b-logs\") pod \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.123389 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-scripts\") pod \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.123438 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-config-data\") pod \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.124334 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8f8l\" (UniqueName: \"kubernetes.io/projected/82fd1f36-9f4f-441f-959d-e2eddc79c99b-kube-api-access-l8f8l\") pod \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.124397 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-combined-ca-bundle\") pod \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.124439 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-public-tls-certs\") pod \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\" (UID: \"82fd1f36-9f4f-441f-959d-e2eddc79c99b\") " Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.133264 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82fd1f36-9f4f-441f-959d-e2eddc79c99b-logs" (OuterVolumeSpecName: "logs") pod "82fd1f36-9f4f-441f-959d-e2eddc79c99b" (UID: "82fd1f36-9f4f-441f-959d-e2eddc79c99b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.165077 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-scripts" (OuterVolumeSpecName: "scripts") pod "82fd1f36-9f4f-441f-959d-e2eddc79c99b" (UID: "82fd1f36-9f4f-441f-959d-e2eddc79c99b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.165618 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82fd1f36-9f4f-441f-959d-e2eddc79c99b-kube-api-access-l8f8l" (OuterVolumeSpecName: "kube-api-access-l8f8l") pod "82fd1f36-9f4f-441f-959d-e2eddc79c99b" (UID: "82fd1f36-9f4f-441f-959d-e2eddc79c99b"). InnerVolumeSpecName "kube-api-access-l8f8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.179453 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.238064 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82fd1f36-9f4f-441f-959d-e2eddc79c99b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.238108 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.238119 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8f8l\" (UniqueName: \"kubernetes.io/projected/82fd1f36-9f4f-441f-959d-e2eddc79c99b-kube-api-access-l8f8l\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.273893 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-config-data" (OuterVolumeSpecName: "config-data") pod "82fd1f36-9f4f-441f-959d-e2eddc79c99b" (UID: "82fd1f36-9f4f-441f-959d-e2eddc79c99b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.343252 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.373402 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82fd1f36-9f4f-441f-959d-e2eddc79c99b" (UID: "82fd1f36-9f4f-441f-959d-e2eddc79c99b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.380900 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "82fd1f36-9f4f-441f-959d-e2eddc79c99b" (UID: "82fd1f36-9f4f-441f-959d-e2eddc79c99b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.444586 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.445067 4758 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:17 crc kubenswrapper[4758]: W0130 08:51:17.464417 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfff40ca2_ce20_4c6a_82c7_aa2c5b744ac8.slice/crio-816cd0811492a8932eeec53efde5a091f1bb457a9f64c154350071d6d85a9f2a WatchSource:0}: Error finding container 816cd0811492a8932eeec53efde5a091f1bb457a9f64c154350071d6d85a9f2a: Status 404 returned error can't find the container with id 816cd0811492a8932eeec53efde5a091f1bb457a9f64c154350071d6d85a9f2a Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.504236 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-psn47"] Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.704049 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "82fd1f36-9f4f-441f-959d-e2eddc79c99b" (UID: "82fd1f36-9f4f-441f-959d-e2eddc79c99b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.761187 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7afc-account-create-update-4gfp5"] Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.769247 4758 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82fd1f36-9f4f-441f-959d-e2eddc79c99b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.789515 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6bdfdc4b-wwqnt" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.803664 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bdfdc4b-wwqnt" event={"ID":"82fd1f36-9f4f-441f-959d-e2eddc79c99b","Type":"ContainerDied","Data":"d71a899a17b20de9e44856675ced999b7836a65883ac57c9f3c9ad5f73066287"} Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.803712 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8mvw7"] Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.803737 4758 scope.go:117] "RemoveContainer" containerID="21679f960e51878ddb064b4d7ad7fbc76c5dcdf3143e55291739f0ba963b83c7" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.805879 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7afc-account-create-update-4gfp5" event={"ID":"227bbca8-b963-4b39-af28-ac9dbf50bc73","Type":"ContainerStarted","Data":"a2c659612010a8683ce3073643412a9fa5344a2a97ed89cbf76212a7b290550d"} Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.835481 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-psn47" event={"ID":"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8","Type":"ContainerStarted","Data":"816cd0811492a8932eeec53efde5a091f1bb457a9f64c154350071d6d85a9f2a"} Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.858952 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-psn47" podStartSLOduration=1.858932372 podStartE2EDuration="1.858932372s" podCreationTimestamp="2026-01-30 08:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:51:17.854549714 +0000 UTC m=+1282.826861265" watchObservedRunningTime="2026-01-30 08:51:17.858932372 +0000 UTC m=+1282.831243943" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.902229 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6bdfdc4b-wwqnt"] Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.909799 4758 scope.go:117] "RemoveContainer" containerID="ca13ba52ecd58780c742a36133f637fb5b60b222e69dea6e09fc29cd35f5fd19" Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.914735 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6bdfdc4b-wwqnt"] Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.973982 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7kv6r"] Jan 30 08:51:17 crc kubenswrapper[4758]: I0130 08:51:17.993682 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4e24-account-create-update-pkqqp"] Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.098258 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-649b-account-create-update-fjrbb"] Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.852259 4758 generic.go:334] "Generic (PLEG): container finished" podID="be7372d3-e1b3-4621-a40b-9b09fd3d7a3b" containerID="67a4a1af734c4adf4de193490eeca87888430ce68cfa2191062d7093ddc838ba" exitCode=0 Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.852390 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-649b-account-create-update-fjrbb" event={"ID":"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b","Type":"ContainerDied","Data":"67a4a1af734c4adf4de193490eeca87888430ce68cfa2191062d7093ddc838ba"} Jan 30 08:51:18 crc 
kubenswrapper[4758]: I0130 08:51:18.852854 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-649b-account-create-update-fjrbb" event={"ID":"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b","Type":"ContainerStarted","Data":"487a75230b0ad7c3beb61caa1e8344bbacf4bcec914f812e0fc95cfaaf6b32f2"} Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.855077 4758 generic.go:334] "Generic (PLEG): container finished" podID="a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad" containerID="4f8803add769de9f15a23dbfbf03688310d1c6b3d5c93d79adba46cb16dec5ca" exitCode=0 Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.855391 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7kv6r" event={"ID":"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad","Type":"ContainerDied","Data":"4f8803add769de9f15a23dbfbf03688310d1c6b3d5c93d79adba46cb16dec5ca"} Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.855512 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7kv6r" event={"ID":"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad","Type":"ContainerStarted","Data":"6558b9481bf5d83a49bf2d63e49da05e1d21559501c33d60cc86bfb24fe3eccc"} Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.858740 4758 generic.go:334] "Generic (PLEG): container finished" podID="227bbca8-b963-4b39-af28-ac9dbf50bc73" containerID="2a553d18e8c8709e9420435d0699cca30262f29d812796ba71700ab0845f9d1c" exitCode=0 Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.858949 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7afc-account-create-update-4gfp5" event={"ID":"227bbca8-b963-4b39-af28-ac9dbf50bc73","Type":"ContainerDied","Data":"2a553d18e8c8709e9420435d0699cca30262f29d812796ba71700ab0845f9d1c"} Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.861540 4758 generic.go:334] "Generic (PLEG): container finished" podID="fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8" containerID="6b5f2a511f68dead3ed1b1b92615f5c6856551f3b3f434c8b9484d4f99758a12" exitCode=0 Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.861627 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-psn47" event={"ID":"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8","Type":"ContainerDied","Data":"6b5f2a511f68dead3ed1b1b92615f5c6856551f3b3f434c8b9484d4f99758a12"} Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.866950 4758 generic.go:334] "Generic (PLEG): container finished" podID="6a7e2817-2850-4d53-8ebf-4977eea68664" containerID="54d62ade615f4d66aac7ea9231240e05dbce2b51363637c91db0cd5147899e3f" exitCode=0 Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.867023 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" event={"ID":"6a7e2817-2850-4d53-8ebf-4977eea68664","Type":"ContainerDied","Data":"54d62ade615f4d66aac7ea9231240e05dbce2b51363637c91db0cd5147899e3f"} Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.867065 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" event={"ID":"6a7e2817-2850-4d53-8ebf-4977eea68664","Type":"ContainerStarted","Data":"cfdbfde9be2d4f234d018b8247245223ca5ce92f66264252d412902253aabb6e"} Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.871478 4758 generic.go:334] "Generic (PLEG): container finished" podID="1d2dbd4c-2c5d-4865-8cf2-ce663e060369" containerID="e022a3d99ce24d9dd10ef619189c15979640758b052ecd93de5065aa03c1b11a" exitCode=0 Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 
08:51:18.871546 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8mvw7" event={"ID":"1d2dbd4c-2c5d-4865-8cf2-ce663e060369","Type":"ContainerDied","Data":"e022a3d99ce24d9dd10ef619189c15979640758b052ecd93de5065aa03c1b11a"} Jan 30 08:51:18 crc kubenswrapper[4758]: I0130 08:51:18.871589 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8mvw7" event={"ID":"1d2dbd4c-2c5d-4865-8cf2-ce663e060369","Type":"ContainerStarted","Data":"b4be5192e8d6318c22c8ad8a80caa6130b1d2b7b78c4b32785798458cd453187"} Jan 30 08:51:19 crc kubenswrapper[4758]: I0130 08:51:19.781083 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" path="/var/lib/kubelet/pods/82fd1f36-9f4f-441f-959d-e2eddc79c99b/volumes" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.404212 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.547096 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7e2817-2850-4d53-8ebf-4977eea68664-operator-scripts\") pod \"6a7e2817-2850-4d53-8ebf-4977eea68664\" (UID: \"6a7e2817-2850-4d53-8ebf-4977eea68664\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.547361 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl7xj\" (UniqueName: \"kubernetes.io/projected/6a7e2817-2850-4d53-8ebf-4977eea68664-kube-api-access-wl7xj\") pod \"6a7e2817-2850-4d53-8ebf-4977eea68664\" (UID: \"6a7e2817-2850-4d53-8ebf-4977eea68664\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.548055 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a7e2817-2850-4d53-8ebf-4977eea68664-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a7e2817-2850-4d53-8ebf-4977eea68664" (UID: "6a7e2817-2850-4d53-8ebf-4977eea68664"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.555340 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7e2817-2850-4d53-8ebf-4977eea68664-kube-api-access-wl7xj" (OuterVolumeSpecName: "kube-api-access-wl7xj") pod "6a7e2817-2850-4d53-8ebf-4977eea68664" (UID: "6a7e2817-2850-4d53-8ebf-4977eea68664"). InnerVolumeSpecName "kube-api-access-wl7xj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.658774 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl7xj\" (UniqueName: \"kubernetes.io/projected/6a7e2817-2850-4d53-8ebf-4977eea68664-kube-api-access-wl7xj\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.658803 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7e2817-2850-4d53-8ebf-4977eea68664-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.759805 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.774256 4758 util.go:48] "No ready sandbox for pod can be found. 
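The generic.go:334 "container finished" lines followed by kubelet.go:2453 "SyncLoop (PLEG)" events are the Pod Lifecycle Event Generator: it periodically relists containers from the runtime, diffs against its previous snapshot, and feeds ContainerStarted/ContainerDied events into the sync loop. A compressed sketch of that diff (states and event names simplified for illustration):

```go
package main

import "fmt"

type state int

const (
	running state = iota
	exited
)

// relist diffs the previous and current container states and emits
// lifecycle events, roughly what PLEG's relist does on each tick.
func relist(old, cur map[string]state) []string {
	var events []string
	for id, s := range cur {
		prev, seen := old[id]
		switch {
		case !seen && s == running:
			events = append(events, "ContainerStarted "+id)
		case seen && prev == running && s == exited:
			events = append(events, "ContainerDied "+id)
		}
	}
	return events
}

func main() {
	old := map[string]state{"e4c6a19": running}
	cur := map[string]state{"e4c6a19": exited, "76f5450": running}
	// Emits one ContainerDied and one ContainerStarted (map order varies).
	fmt.Println(relist(old, cur))
}
```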
Need to start a new one" pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.786779 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.831089 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.851965 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.865500 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-operator-scripts\") pod \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\" (UID: \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.865573 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqg5b\" (UniqueName: \"kubernetes.io/projected/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-kube-api-access-mqg5b\") pod \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\" (UID: \"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.865719 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-operator-scripts\") pod \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\" (UID: \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.865754 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-operator-scripts\") pod \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\" (UID: \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.865786 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lflrx\" (UniqueName: \"kubernetes.io/projected/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-kube-api-access-lflrx\") pod \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\" (UID: \"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.865819 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q78gs\" (UniqueName: \"kubernetes.io/projected/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-kube-api-access-q78gs\") pod \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\" (UID: \"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.871030 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-kube-api-access-q78gs" (OuterVolumeSpecName: "kube-api-access-q78gs") pod "a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad" (UID: "a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad"). InnerVolumeSpecName "kube-api-access-q78gs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.871512 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be7372d3-e1b3-4621-a40b-9b09fd3d7a3b" (UID: "be7372d3-e1b3-4621-a40b-9b09fd3d7a3b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.871901 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad" (UID: "a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.872269 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8" (UID: "fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.882821 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-kube-api-access-lflrx" (OuterVolumeSpecName: "kube-api-access-lflrx") pod "fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8" (UID: "fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8"). InnerVolumeSpecName "kube-api-access-lflrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.887221 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-kube-api-access-mqg5b" (OuterVolumeSpecName: "kube-api-access-mqg5b") pod "be7372d3-e1b3-4621-a40b-9b09fd3d7a3b" (UID: "be7372d3-e1b3-4621-a40b-9b09fd3d7a3b"). InnerVolumeSpecName "kube-api-access-mqg5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.906084 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-649b-account-create-update-fjrbb" event={"ID":"be7372d3-e1b3-4621-a40b-9b09fd3d7a3b","Type":"ContainerDied","Data":"487a75230b0ad7c3beb61caa1e8344bbacf4bcec914f812e0fc95cfaaf6b32f2"} Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.906126 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="487a75230b0ad7c3beb61caa1e8344bbacf4bcec914f812e0fc95cfaaf6b32f2" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.906194 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-649b-account-create-update-fjrbb" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.920276 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7kv6r" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.920296 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7kv6r" event={"ID":"a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad","Type":"ContainerDied","Data":"6558b9481bf5d83a49bf2d63e49da05e1d21559501c33d60cc86bfb24fe3eccc"} Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.920379 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6558b9481bf5d83a49bf2d63e49da05e1d21559501c33d60cc86bfb24fe3eccc" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.924501 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7afc-account-create-update-4gfp5" event={"ID":"227bbca8-b963-4b39-af28-ac9dbf50bc73","Type":"ContainerDied","Data":"a2c659612010a8683ce3073643412a9fa5344a2a97ed89cbf76212a7b290550d"} Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.924575 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2c659612010a8683ce3073643412a9fa5344a2a97ed89cbf76212a7b290550d" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.924678 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7afc-account-create-update-4gfp5" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.938404 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-psn47" event={"ID":"fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8","Type":"ContainerDied","Data":"816cd0811492a8932eeec53efde5a091f1bb457a9f64c154350071d6d85a9f2a"} Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.938448 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="816cd0811492a8932eeec53efde5a091f1bb457a9f64c154350071d6d85a9f2a" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.938511 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-psn47" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.951955 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" event={"ID":"6a7e2817-2850-4d53-8ebf-4977eea68664","Type":"ContainerDied","Data":"cfdbfde9be2d4f234d018b8247245223ca5ce92f66264252d412902253aabb6e"} Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.951997 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfdbfde9be2d4f234d018b8247245223ca5ce92f66264252d412902253aabb6e" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.952074 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4e24-account-create-update-pkqqp" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.959125 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8mvw7" event={"ID":"1d2dbd4c-2c5d-4865-8cf2-ce663e060369","Type":"ContainerDied","Data":"b4be5192e8d6318c22c8ad8a80caa6130b1d2b7b78c4b32785798458cd453187"} Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.959334 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4be5192e8d6318c22c8ad8a80caa6130b1d2b7b78c4b32785798458cd453187" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.959399 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8mvw7" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.967897 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2zm6\" (UniqueName: \"kubernetes.io/projected/227bbca8-b963-4b39-af28-ac9dbf50bc73-kube-api-access-q2zm6\") pod \"227bbca8-b963-4b39-af28-ac9dbf50bc73\" (UID: \"227bbca8-b963-4b39-af28-ac9dbf50bc73\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.967951 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/227bbca8-b963-4b39-af28-ac9dbf50bc73-operator-scripts\") pod \"227bbca8-b963-4b39-af28-ac9dbf50bc73\" (UID: \"227bbca8-b963-4b39-af28-ac9dbf50bc73\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.968143 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-operator-scripts\") pod \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\" (UID: \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.968245 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbr77\" (UniqueName: \"kubernetes.io/projected/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-kube-api-access-kbr77\") pod \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\" (UID: \"1d2dbd4c-2c5d-4865-8cf2-ce663e060369\") " Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.968961 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d2dbd4c-2c5d-4865-8cf2-ce663e060369" (UID: "1d2dbd4c-2c5d-4865-8cf2-ce663e060369"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.969164 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.969191 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqg5b\" (UniqueName: \"kubernetes.io/projected/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b-kube-api-access-mqg5b\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.969209 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.969395 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.969432 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lflrx\" (UniqueName: \"kubernetes.io/projected/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8-kube-api-access-lflrx\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.969445 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q78gs\" (UniqueName: \"kubernetes.io/projected/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad-kube-api-access-q78gs\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.969457 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.969656 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/227bbca8-b963-4b39-af28-ac9dbf50bc73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "227bbca8-b963-4b39-af28-ac9dbf50bc73" (UID: "227bbca8-b963-4b39-af28-ac9dbf50bc73"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.973735 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/227bbca8-b963-4b39-af28-ac9dbf50bc73-kube-api-access-q2zm6" (OuterVolumeSpecName: "kube-api-access-q2zm6") pod "227bbca8-b963-4b39-af28-ac9dbf50bc73" (UID: "227bbca8-b963-4b39-af28-ac9dbf50bc73"). InnerVolumeSpecName "kube-api-access-q2zm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:20 crc kubenswrapper[4758]: I0130 08:51:20.975235 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-kube-api-access-kbr77" (OuterVolumeSpecName: "kube-api-access-kbr77") pod "1d2dbd4c-2c5d-4865-8cf2-ce663e060369" (UID: "1d2dbd4c-2c5d-4865-8cf2-ce663e060369"). InnerVolumeSpecName "kube-api-access-kbr77". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:21 crc kubenswrapper[4758]: I0130 08:51:21.072410 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbr77\" (UniqueName: \"kubernetes.io/projected/1d2dbd4c-2c5d-4865-8cf2-ce663e060369-kube-api-access-kbr77\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:21 crc kubenswrapper[4758]: I0130 08:51:21.072456 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2zm6\" (UniqueName: \"kubernetes.io/projected/227bbca8-b963-4b39-af28-ac9dbf50bc73-kube-api-access-q2zm6\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:21 crc kubenswrapper[4758]: I0130 08:51:21.072470 4758 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/227bbca8-b963-4b39-af28-ac9dbf50bc73-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:22 crc kubenswrapper[4758]: I0130 08:51:22.056437 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:51:22 crc kubenswrapper[4758]: I0130 08:51:22.056995 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerName="glance-log" containerID="cri-o://c5be373def2d5d6ba0348da3d6e663da2b77ce86d4b3d39d71e5ba9a890af4be" gracePeriod=30 Jan 30 08:51:22 crc kubenswrapper[4758]: I0130 08:51:22.057089 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerName="glance-httpd" containerID="cri-o://133e57354834ba4048e5d9ae39382e69423ed21832e9edcfe49d508cca9e97e3" gracePeriod=30 Jan 30 08:51:22 crc kubenswrapper[4758]: I0130 08:51:22.387247 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:51:22 crc kubenswrapper[4758]: I0130 08:51:22.387325 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:51:22 crc kubenswrapper[4758]: I0130 08:51:22.977246 4758 generic.go:334] "Generic (PLEG): container finished" podID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerID="c5be373def2d5d6ba0348da3d6e663da2b77ce86d4b3d39d71e5ba9a890af4be" exitCode=143 Jan 30 08:51:22 crc kubenswrapper[4758]: I0130 08:51:22.977563 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1ec2d231-4b26-4cc9-a09d-9091153da8a9","Type":"ContainerDied","Data":"c5be373def2d5d6ba0348da3d6e663da2b77ce86d4b3d39d71e5ba9a890af4be"} Jan 30 08:51:23 crc kubenswrapper[4758]: I0130 08:51:23.678090 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 08:51:23 crc kubenswrapper[4758]: I0130 08:51:23.742122 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:51:23 crc kubenswrapper[4758]: I0130 08:51:23.742333 4758 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/glance-default-internal-api-0" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-log" containerID="cri-o://76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1" gracePeriod=30 Jan 30 08:51:23 crc kubenswrapper[4758]: I0130 08:51:23.742736 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-httpd" containerID="cri-o://c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea" gracePeriod=30 Jan 30 08:51:23 crc kubenswrapper[4758]: I0130 08:51:23.990048 4758 generic.go:334] "Generic (PLEG): container finished" podID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerID="76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1" exitCode=143 Jan 30 08:51:23 crc kubenswrapper[4758]: I0130 08:51:23.990308 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f","Type":"ContainerDied","Data":"76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1"} Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.013088 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.026244 4758 generic.go:334] "Generic (PLEG): container finished" podID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerID="133e57354834ba4048e5d9ae39382e69423ed21832e9edcfe49d508cca9e97e3" exitCode=0 Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.026291 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1ec2d231-4b26-4cc9-a09d-9091153da8a9","Type":"ContainerDied","Data":"133e57354834ba4048e5d9ae39382e69423ed21832e9edcfe49d508cca9e97e3"} Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.026319 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1ec2d231-4b26-4cc9-a09d-9091153da8a9","Type":"ContainerDied","Data":"a7359b09c33ee03bcd96bf4b4d5dd4f4f882e30e5f48059a01c80017926d9869"} Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.026330 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7359b09c33ee03bcd96bf4b4d5dd4f4f882e30e5f48059a01c80017926d9869" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.027087 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.044940 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.179985 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-logs\") pod \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.180592 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-public-tls-certs\") pod \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.180527 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-logs" (OuterVolumeSpecName: "logs") pod "1ec2d231-4b26-4cc9-a09d-9091153da8a9" (UID: "1ec2d231-4b26-4cc9-a09d-9091153da8a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.180658 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-httpd-run\") pod \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.180901 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1ec2d231-4b26-4cc9-a09d-9091153da8a9" (UID: "1ec2d231-4b26-4cc9-a09d-9091153da8a9"). InnerVolumeSpecName "httpd-run". 
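The liveness and startup probe failures logged by prober.go are plain HTTP GETs issued by the kubelet against the pod IP; "connection refused" means nothing was listening on the target socket yet (or any more). A sketch of the kind of probe stanza that would produce the horizon checks above, reconstructed from the logged URL only; scheme, port, and path come from the log, while thresholds and periods are guesses:

# Hypothetical container fragment; probe target taken from the logged GET.
containers:
- name: horizon
  startupProbe:
    httpGet:
      scheme: HTTPS
      port: 8443
      path: /dashboard/auth/login/?next=/dashboard/
    periodSeconds: 10       # assumed; not recoverable from the log
    failureThreshold: 30    # assumed; gives the app time to come up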
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.180923 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-scripts\") pod \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.180958 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-combined-ca-bundle\") pod \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.180987 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-config-data\") pod \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.181014 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.181051 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mbs4\" (UniqueName: \"kubernetes.io/projected/1ec2d231-4b26-4cc9-a09d-9091153da8a9-kube-api-access-9mbs4\") pod \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\" (UID: \"1ec2d231-4b26-4cc9-a09d-9091153da8a9\") " Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.181465 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.181480 4758 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1ec2d231-4b26-4cc9-a09d-9091153da8a9-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.186718 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "1ec2d231-4b26-4cc9-a09d-9091153da8a9" (UID: "1ec2d231-4b26-4cc9-a09d-9091153da8a9"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.188670 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-scripts" (OuterVolumeSpecName: "scripts") pod "1ec2d231-4b26-4cc9-a09d-9091153da8a9" (UID: "1ec2d231-4b26-4cc9-a09d-9091153da8a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.191014 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ec2d231-4b26-4cc9-a09d-9091153da8a9-kube-api-access-9mbs4" (OuterVolumeSpecName: "kube-api-access-9mbs4") pod "1ec2d231-4b26-4cc9-a09d-9091153da8a9" (UID: "1ec2d231-4b26-4cc9-a09d-9091153da8a9"). InnerVolumeSpecName "kube-api-access-9mbs4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.252222 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ec2d231-4b26-4cc9-a09d-9091153da8a9" (UID: "1ec2d231-4b26-4cc9-a09d-9091153da8a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.274079 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-config-data" (OuterVolumeSpecName: "config-data") pod "1ec2d231-4b26-4cc9-a09d-9091153da8a9" (UID: "1ec2d231-4b26-4cc9-a09d-9091153da8a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.287993 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.288025 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.288051 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.288071 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.288081 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mbs4\" (UniqueName: \"kubernetes.io/projected/1ec2d231-4b26-4cc9-a09d-9091153da8a9-kube-api-access-9mbs4\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.301624 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1ec2d231-4b26-4cc9-a09d-9091153da8a9" (UID: "1ec2d231-4b26-4cc9-a09d-9091153da8a9"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.317140 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.389474 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.389513 4758 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ec2d231-4b26-4cc9-a09d-9091153da8a9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665220 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-chr6l"] Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665803 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerName="placement-api" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665816 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerName="placement-api" Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665830 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerName="placement-log" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665836 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerName="placement-log" Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665847 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be7372d3-e1b3-4621-a40b-9b09fd3d7a3b" containerName="mariadb-account-create-update" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665854 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="be7372d3-e1b3-4621-a40b-9b09fd3d7a3b" containerName="mariadb-account-create-update" Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665865 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8" containerName="mariadb-database-create" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665871 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8" containerName="mariadb-database-create" Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665888 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerName="glance-log" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665894 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerName="glance-log" Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665904 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d2dbd4c-2c5d-4865-8cf2-ce663e060369" containerName="mariadb-database-create" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665911 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d2dbd4c-2c5d-4865-8cf2-ce663e060369" containerName="mariadb-database-create" Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665921 4758 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad" containerName="mariadb-database-create" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665928 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad" containerName="mariadb-database-create" Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665943 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227bbca8-b963-4b39-af28-ac9dbf50bc73" containerName="mariadb-account-create-update" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665948 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="227bbca8-b963-4b39-af28-ac9dbf50bc73" containerName="mariadb-account-create-update" Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665960 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7e2817-2850-4d53-8ebf-4977eea68664" containerName="mariadb-account-create-update" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665966 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7e2817-2850-4d53-8ebf-4977eea68664" containerName="mariadb-account-create-update" Jan 30 08:51:26 crc kubenswrapper[4758]: E0130 08:51:26.665976 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerName="glance-httpd" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.665982 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerName="glance-httpd" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666186 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerName="placement-api" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666198 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="227bbca8-b963-4b39-af28-ac9dbf50bc73" containerName="mariadb-account-create-update" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666210 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad" containerName="mariadb-database-create" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666222 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerName="glance-log" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666230 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8" containerName="mariadb-database-create" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666238 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="82fd1f36-9f4f-441f-959d-e2eddc79c99b" containerName="placement-log" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666250 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" containerName="glance-httpd" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666259 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="be7372d3-e1b3-4621-a40b-9b09fd3d7a3b" containerName="mariadb-account-create-update" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666269 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d2dbd4c-2c5d-4865-8cf2-ce663e060369" containerName="mariadb-database-create" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666279 4758 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6a7e2817-2850-4d53-8ebf-4977eea68664" containerName="mariadb-account-create-update" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.666846 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.670423 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wlml7" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.671812 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.678799 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.698511 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-chr6l"] Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.802592 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-config-data\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.802724 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.802779 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-scripts\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.802831 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s5dr\" (UniqueName: \"kubernetes.io/projected/06a33948-1e21-49dd-9f48-b4c188ae6e9d-kube-api-access-4s5dr\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.904712 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-scripts\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.904764 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s5dr\" (UniqueName: \"kubernetes.io/projected/06a33948-1e21-49dd-9f48-b4c188ae6e9d-kube-api-access-4s5dr\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.904941 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-config-data\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.905145 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.912114 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.913909 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-config-data\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.916646 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-scripts\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.928850 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s5dr\" (UniqueName: \"kubernetes.io/projected/06a33948-1e21-49dd-9f48-b4c188ae6e9d-kube-api-access-4s5dr\") pod \"nova-cell0-conductor-db-sync-chr6l\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:26 crc kubenswrapper[4758]: I0130 08:51:26.984311 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.042481 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.102467 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.156640 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.168914 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.170424 4758 util.go:30] "No sandbox for pod can be found. 
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.170424 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.174584 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.174831 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.218289 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.328254 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.328330 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e6a95ad-6f31-4494-9caf-5eea1c43e005-logs\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.328373 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6glh8\" (UniqueName: \"kubernetes.io/projected/9e6a95ad-6f31-4494-9caf-5eea1c43e005-kube-api-access-6glh8\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.328428 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-config-data\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.328456 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.328479 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.328504 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e6a95ad-6f31-4494-9caf-5eea1c43e005-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.328519 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-scripts\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.430443 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e6a95ad-6f31-4494-9caf-5eea1c43e005-logs\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.430502 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6glh8\" (UniqueName: \"kubernetes.io/projected/9e6a95ad-6f31-4494-9caf-5eea1c43e005-kube-api-access-6glh8\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.430559 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-config-data\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.430584 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.430608 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.430632 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e6a95ad-6f31-4494-9caf-5eea1c43e005-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.430649 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-scripts\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.430699 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.439612 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.443424 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.446369 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9e6a95ad-6f31-4494-9caf-5eea1c43e005-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.449258 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e6a95ad-6f31-4494-9caf-5eea1c43e005-logs\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.455619 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-scripts\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.468620 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.474838 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6a95ad-6f31-4494-9caf-5eea1c43e005-config-data\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.480921 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6glh8\" (UniqueName: \"kubernetes.io/projected/9e6a95ad-6f31-4494-9caf-5eea1c43e005-kube-api-access-6glh8\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.500530 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"9e6a95ad-6f31-4494-9caf-5eea1c43e005\") " pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.530519 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.672641 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": dial tcp 10.217.0.150:9292: connect: connection refused"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.673141 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": dial tcp 10.217.0.150:9292: connect: connection refused"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.696545 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-chr6l"]
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.795958 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ec2d231-4b26-4cc9-a09d-9091153da8a9" path="/var/lib/kubelet/pods/1ec2d231-4b26-4cc9-a09d-9091153da8a9/volumes"
Jan 30 08:51:27 crc kubenswrapper[4758]: I0130 08:51:27.914209 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.060762 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-chr6l" event={"ID":"06a33948-1e21-49dd-9f48-b4c188ae6e9d","Type":"ContainerStarted","Data":"ff53183262e2ed46b22c3c8849edd4292f90605608d79f5735d4ef8ac3de71f3"}
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.063477 4758 generic.go:334] "Generic (PLEG): container finished" podID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerID="c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea" exitCode=0
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.063511 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f","Type":"ContainerDied","Data":"c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea"}
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.063534 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f","Type":"ContainerDied","Data":"09ea2b79cbc0aef888ba0674456b14d2b50865632ce01fc1d4296c4eec420a06"}
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.063560 4758 scope.go:117] "RemoveContainer" containerID="c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea"
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.063748 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.064220 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmcr9\" (UniqueName: \"kubernetes.io/projected/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-kube-api-access-mmcr9\") pod \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") "
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.064318 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-logs\") pod \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") "
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.064412 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-internal-tls-certs\") pod \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") "
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.064494 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") "
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.064553 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-combined-ca-bundle\") pod \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") "
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.064611 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-httpd-run\") pod \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") "
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.064665 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-scripts\") pod \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") "
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.064798 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-config-data\") pod \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\" (UID: \"5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f\") "
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.067906 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-logs" (OuterVolumeSpecName: "logs") pod "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" (UID: "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.068895 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" (UID: "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.082637 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.113751 4758 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.113787 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-logs\") on node \"crc\" DevicePath \"\""
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.123931 4758 scope.go:117] "RemoveContainer" containerID="76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1"
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.124112 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-scripts" (OuterVolumeSpecName: "scripts") pod "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" (UID: "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.148200 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" (UID: "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.197093 4758 scope.go:117] "RemoveContainer" containerID="c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea" Jan 30 08:51:28 crc kubenswrapper[4758]: E0130 08:51:28.198740 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea\": container with ID starting with c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea not found: ID does not exist" containerID="c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.198774 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea"} err="failed to get container status \"c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea\": rpc error: code = NotFound desc = could not find container \"c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea\": container with ID starting with c0fd4656c6acfde15aace8abf18c71ba1348028540fdc7a532798d50b049a6ea not found: ID does not exist" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.198804 4758 scope.go:117] "RemoveContainer" containerID="76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1" Jan 30 08:51:28 crc kubenswrapper[4758]: E0130 08:51:28.204006 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1\": container with ID starting with 76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1 not found: ID does not exist" containerID="76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.204794 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1"} err="failed to get container status \"76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1\": rpc error: code = NotFound desc = could not find container \"76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1\": container with ID starting with 76f56a58dfc42682e6eeddd85effca6e175d39c50b373495acd452c62cdeeee1 not found: ID does not exist" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.216409 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.216439 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.216448 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmcr9\" (UniqueName: \"kubernetes.io/projected/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-kube-api-access-mmcr9\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.247435 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-config-data" (OuterVolumeSpecName: 
"config-data") pod "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" (UID: "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.247876 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" (UID: "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.257602 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" (UID: "5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.280582 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.323923 4758 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.323956 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.323970 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.323983 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.353656 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.435468 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.448093 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.472686 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:51:28 crc kubenswrapper[4758]: E0130 08:51:28.477782 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-httpd" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.477812 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-httpd" Jan 30 08:51:28 crc kubenswrapper[4758]: E0130 08:51:28.477828 4758 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-log" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.477833 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-log" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.478095 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-httpd" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.478117 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" containerName="glance-log" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.479158 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.488316 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.499334 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.500607 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.637603 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fa932ed-7bd7-4827-a24f-e29c15c9b563-logs\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.637694 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.637779 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7fa932ed-7bd7-4827-a24f-e29c15c9b563-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.637827 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.637855 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.637888 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmm5q\" 
(UniqueName: \"kubernetes.io/projected/7fa932ed-7bd7-4827-a24f-e29c15c9b563-kube-api-access-gmm5q\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.638001 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.638024 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.739853 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.740221 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.740304 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fa932ed-7bd7-4827-a24f-e29c15c9b563-logs\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.740337 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.740400 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7fa932ed-7bd7-4827-a24f-e29c15c9b563-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.740448 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.740471 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.740495 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmm5q\" (UniqueName: \"kubernetes.io/projected/7fa932ed-7bd7-4827-a24f-e29c15c9b563-kube-api-access-gmm5q\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.741920 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.744988 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7fa932ed-7bd7-4827-a24f-e29c15c9b563-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.745710 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fa932ed-7bd7-4827-a24f-e29c15c9b563-logs\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.764641 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.766601 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.772383 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.773218 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmm5q\" (UniqueName: \"kubernetes.io/projected/7fa932ed-7bd7-4827-a24f-e29c15c9b563-kube-api-access-gmm5q\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.794332 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fa932ed-7bd7-4827-a24f-e29c15c9b563-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.836782 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"7fa932ed-7bd7-4827-a24f-e29c15c9b563\") " pod="openstack/glance-default-internal-api-0" Jan 30 08:51:28 crc kubenswrapper[4758]: I0130 08:51:28.917795 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:29 crc kubenswrapper[4758]: I0130 08:51:29.119092 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e6a95ad-6f31-4494-9caf-5eea1c43e005","Type":"ContainerStarted","Data":"391a2503f40d24ef8468ab0cf86e82db544fd638bbd7f6f4bdf182461131be1e"} Jan 30 08:51:29 crc kubenswrapper[4758]: W0130 08:51:29.763288 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fa932ed_7bd7_4827_a24f_e29c15c9b563.slice/crio-c4442073f86fc6b56fa9e8dbc0c6024a6c77fcc591509ca090b5c5136280362c WatchSource:0}: Error finding container c4442073f86fc6b56fa9e8dbc0c6024a6c77fcc591509ca090b5c5136280362c: Status 404 returned error can't find the container with id c4442073f86fc6b56fa9e8dbc0c6024a6c77fcc591509ca090b5c5136280362c Jan 30 08:51:29 crc kubenswrapper[4758]: I0130 08:51:29.764111 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:51:29 crc kubenswrapper[4758]: E0130 08:51:29.764310 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:51:29 crc kubenswrapper[4758]: E0130 08:51:29.764330 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-75f5775999-fhl5h: configmap "swift-ring-files" not found Jan 30 08:51:29 crc kubenswrapper[4758]: E0130 08:51:29.764387 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift podName:c2358e5c-db98-4b7b-8b6c-2e83132655a9 nodeName:}" failed. No retries permitted until 2026-01-30 08:52:01.764371731 +0000 UTC m=+1326.736683282 (durationBeforeRetry 32s). 
Jan 30 08:51:29 crc kubenswrapper[4758]: I0130 08:51:29.793201 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f" path="/var/lib/kubelet/pods/5af263c7-b4ef-4cd9-bf61-2caa6ce1a43f/volumes"
Jan 30 08:51:29 crc kubenswrapper[4758]: I0130 08:51:29.794073 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 08:51:30 crc kubenswrapper[4758]: I0130 08:51:30.197852 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e6a95ad-6f31-4494-9caf-5eea1c43e005","Type":"ContainerStarted","Data":"e0d2754618477bb743ceff0dadebc2378bd1d077661a715cd83f31912f354dcf"}
Jan 30 08:51:30 crc kubenswrapper[4758]: I0130 08:51:30.206141 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7fa932ed-7bd7-4827-a24f-e29c15c9b563","Type":"ContainerStarted","Data":"c4442073f86fc6b56fa9e8dbc0c6024a6c77fcc591509ca090b5c5136280362c"}
Jan 30 08:51:30 crc kubenswrapper[4758]: I0130 08:51:30.331256 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="d6d3f5e9-e330-476b-be63-775114f987e6" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.173:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 08:51:31 crc kubenswrapper[4758]: I0130 08:51:31.267689 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7fa932ed-7bd7-4827-a24f-e29c15c9b563","Type":"ContainerStarted","Data":"3abe4e9fe60381a5aef6d17c1d31d12ae165a4d405b4c3caadd447c18bfde8b4"}
Jan 30 08:51:32 crc kubenswrapper[4758]: I0130 08:51:32.288911 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9e6a95ad-6f31-4494-9caf-5eea1c43e005","Type":"ContainerStarted","Data":"b35731ce8d97af63524dc2599a821b9957ee8d4fbfb0c8d7c135f659261e2579"}
Jan 30 08:51:32 crc kubenswrapper[4758]: I0130 08:51:32.298513 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7fa932ed-7bd7-4827-a24f-e29c15c9b563","Type":"ContainerStarted","Data":"1f0c188b26791f3f363cf7d39f9343081b6efaa5d32eed1b6bb8fa592574a9f7"}
Jan 30 08:51:32 crc kubenswrapper[4758]: I0130 08:51:32.328187 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.328157757 podStartE2EDuration="5.328157757s" podCreationTimestamp="2026-01-30 08:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:51:32.317304352 +0000 UTC m=+1297.289615893" watchObservedRunningTime="2026-01-30 08:51:32.328157757 +0000 UTC m=+1297.300469318"
Jan 30 08:51:32 crc kubenswrapper[4758]: I0130 08:51:32.354142 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.354122812 podStartE2EDuration="4.354122812s" podCreationTimestamp="2026-01-30 08:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:51:32.349542996 +0000 UTC m=+1297.321854547" watchObservedRunningTime="2026-01-30 08:51:32.354122812 +0000 UTC m=+1297.326434363"
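In the two pod_startup_latency_tracker entries above, podStartSLOduration equals podStartE2EDuration because both pull timestamps are zero-valued (no image pull happened). The nova-cell0-conductor-db-sync-chr6l entry further down (19.532280647s e2e, 2.854086464s SLO) suggests the SLO figure is the e2e duration minus the image-pull window; that reading is inferred from the numbers, not from the tracker's source. A quick check in Go using the nova entry's timestamps:

// Back-of-the-envelope check of the assumed relation
// SLO duration = (watchObservedRunningTime - podCreationTimestamp) - pull window.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-01-30T08:51:26Z")                 // podCreationTimestamp
	firstPull, _ := time.Parse(time.RFC3339Nano, "2026-01-30T08:51:27.710198871Z") // firstStartedPulling
	lastPull, _ := time.Parse(time.RFC3339Nano, "2026-01-30T08:51:44.388393054Z")  // lastFinishedPulling
	observed, _ := time.Parse(time.RFC3339Nano, "2026-01-30T08:51:45.532280647Z")  // watchObservedRunningTime

	e2e := observed.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(e2e, slo) // 19.532280647s 2.854086464s, matching the log entry
}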
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:51:32.349542996 +0000 UTC m=+1297.321854547" watchObservedRunningTime="2026-01-30 08:51:32.354122812 +0000 UTC m=+1297.326434363" Jan 30 08:51:36 crc kubenswrapper[4758]: I0130 08:51:36.010939 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:51:36 crc kubenswrapper[4758]: I0130 08:51:36.044577 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 08:51:37 crc kubenswrapper[4758]: I0130 08:51:37.531358 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 08:51:37 crc kubenswrapper[4758]: I0130 08:51:37.531706 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 08:51:37 crc kubenswrapper[4758]: I0130 08:51:37.575181 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 08:51:37 crc kubenswrapper[4758]: I0130 08:51:37.579275 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 08:51:38 crc kubenswrapper[4758]: I0130 08:51:38.373825 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 08:51:38 crc kubenswrapper[4758]: I0130 08:51:38.373879 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 08:51:38 crc kubenswrapper[4758]: I0130 08:51:38.919348 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:38 crc kubenswrapper[4758]: I0130 08:51:38.919702 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:38 crc kubenswrapper[4758]: I0130 08:51:38.981961 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:39 crc kubenswrapper[4758]: I0130 08:51:39.044823 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:39 crc kubenswrapper[4758]: I0130 08:51:39.382151 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:39 crc kubenswrapper[4758]: I0130 08:51:39.382180 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:41 crc kubenswrapper[4758]: I0130 08:51:41.402865 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:51:41 crc kubenswrapper[4758]: I0130 08:51:41.403203 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:51:42 crc kubenswrapper[4758]: I0130 08:51:42.442022 4758 generic.go:334] 
"Generic (PLEG): container finished" podID="456181b2-373c-4e15-abaf-b35287b20b59" containerID="67149206f58ad0507606cce2e343d6cc655be53f23f559d70aa0ec8a091a7ab5" exitCode=137 Jan 30 08:51:42 crc kubenswrapper[4758]: I0130 08:51:42.442206 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerDied","Data":"67149206f58ad0507606cce2e343d6cc655be53f23f559d70aa0ec8a091a7ab5"} Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.038891 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.039977 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.501686 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456181b2-373c-4e15-abaf-b35287b20b59","Type":"ContainerDied","Data":"d712f3f39456f410b004f641e198f9ee1a339ca9d1e26fe4b3178bcb4028fc36"} Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.502030 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d712f3f39456f410b004f641e198f9ee1a339ca9d1e26fe4b3178bcb4028fc36" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.521777 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.521895 4758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.556749 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.568149 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.714314 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-config-data\") pod \"456181b2-373c-4e15-abaf-b35287b20b59\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.714741 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-run-httpd\") pod \"456181b2-373c-4e15-abaf-b35287b20b59\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.714784 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-scripts\") pod \"456181b2-373c-4e15-abaf-b35287b20b59\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.714807 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-sg-core-conf-yaml\") pod \"456181b2-373c-4e15-abaf-b35287b20b59\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.714893 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-combined-ca-bundle\") pod \"456181b2-373c-4e15-abaf-b35287b20b59\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.714945 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2c7l\" (UniqueName: \"kubernetes.io/projected/456181b2-373c-4e15-abaf-b35287b20b59-kube-api-access-x2c7l\") pod \"456181b2-373c-4e15-abaf-b35287b20b59\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.714978 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-log-httpd\") pod \"456181b2-373c-4e15-abaf-b35287b20b59\" (UID: \"456181b2-373c-4e15-abaf-b35287b20b59\") " Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.717514 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "456181b2-373c-4e15-abaf-b35287b20b59" (UID: "456181b2-373c-4e15-abaf-b35287b20b59"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.719622 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "456181b2-373c-4e15-abaf-b35287b20b59" (UID: "456181b2-373c-4e15-abaf-b35287b20b59"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.728342 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456181b2-373c-4e15-abaf-b35287b20b59-kube-api-access-x2c7l" (OuterVolumeSpecName: "kube-api-access-x2c7l") pod "456181b2-373c-4e15-abaf-b35287b20b59" (UID: "456181b2-373c-4e15-abaf-b35287b20b59"). InnerVolumeSpecName "kube-api-access-x2c7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.733273 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-scripts" (OuterVolumeSpecName: "scripts") pod "456181b2-373c-4e15-abaf-b35287b20b59" (UID: "456181b2-373c-4e15-abaf-b35287b20b59"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.818006 4758 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.818085 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.818098 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2c7l\" (UniqueName: \"kubernetes.io/projected/456181b2-373c-4e15-abaf-b35287b20b59-kube-api-access-x2c7l\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.818109 4758 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456181b2-373c-4e15-abaf-b35287b20b59-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.850257 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.856175 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "456181b2-373c-4e15-abaf-b35287b20b59" (UID: "456181b2-373c-4e15-abaf-b35287b20b59"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.866181 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "456181b2-373c-4e15-abaf-b35287b20b59" (UID: "456181b2-373c-4e15-abaf-b35287b20b59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.913239 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-config-data" (OuterVolumeSpecName: "config-data") pod "456181b2-373c-4e15-abaf-b35287b20b59" (UID: "456181b2-373c-4e15-abaf-b35287b20b59"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.920264 4758 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.920499 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:44 crc kubenswrapper[4758]: I0130 08:51:44.920564 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456181b2-373c-4e15-abaf-b35287b20b59-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.511122 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-chr6l" event={"ID":"06a33948-1e21-49dd-9f48-b4c188ae6e9d","Type":"ContainerStarted","Data":"638aaa1aba025b2a9d201ffacd37903c3bef07833c13c4c8b9d80743c4260d8f"} Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.511198 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.532297 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-chr6l" podStartSLOduration=2.854086464 podStartE2EDuration="19.532280647s" podCreationTimestamp="2026-01-30 08:51:26 +0000 UTC" firstStartedPulling="2026-01-30 08:51:27.710198871 +0000 UTC m=+1292.682510422" lastFinishedPulling="2026-01-30 08:51:44.388393054 +0000 UTC m=+1309.360704605" observedRunningTime="2026-01-30 08:51:45.531336086 +0000 UTC m=+1310.503647647" watchObservedRunningTime="2026-01-30 08:51:45.532280647 +0000 UTC m=+1310.504592198" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.575799 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.582614 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.594112 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:51:45 crc kubenswrapper[4758]: E0130 08:51:45.594526 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="ceilometer-notification-agent" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.594544 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="ceilometer-notification-agent" Jan 30 08:51:45 crc kubenswrapper[4758]: E0130 08:51:45.594559 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="ceilometer-central-agent" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.594565 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="ceilometer-central-agent" Jan 30 08:51:45 crc kubenswrapper[4758]: E0130 08:51:45.594587 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="proxy-httpd" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.594593 4758 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="proxy-httpd" Jan 30 08:51:45 crc kubenswrapper[4758]: E0130 08:51:45.594605 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="sg-core" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.594611 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="sg-core" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.594796 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="proxy-httpd" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.594816 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="ceilometer-notification-agent" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.594832 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="ceilometer-central-agent" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.594843 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="456181b2-373c-4e15-abaf-b35287b20b59" containerName="sg-core" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.602941 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.605993 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.613433 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.621497 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.634584 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-config-data\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.634719 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-run-httpd\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.634746 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-scripts\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.634764 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.634797 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bqwk\" (UniqueName: \"kubernetes.io/projected/63c569ce-66c7-4001-8478-3f20fb34b143-kube-api-access-4bqwk\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.634820 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-log-httpd\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.634850 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.736630 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-run-httpd\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.736958 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-scripts\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.736988 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.737152 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-run-httpd\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.737032 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bqwk\" (UniqueName: \"kubernetes.io/projected/63c569ce-66c7-4001-8478-3f20fb34b143-kube-api-access-4bqwk\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.737789 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-log-httpd\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.737830 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: 
I0130 08:51:45.737928 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-config-data\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.738163 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-log-httpd\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.748966 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-scripts\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.749513 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-config-data\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.750737 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.772276 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.819530 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bqwk\" (UniqueName: \"kubernetes.io/projected/63c569ce-66c7-4001-8478-3f20fb34b143-kube-api-access-4bqwk\") pod \"ceilometer-0\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") " pod="openstack/ceilometer-0" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.853184 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="456181b2-373c-4e15-abaf-b35287b20b59" path="/var/lib/kubelet/pods/456181b2-373c-4e15-abaf-b35287b20b59/volumes" Jan 30 08:51:45 crc kubenswrapper[4758]: I0130 08:51:45.922452 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:51:46 crc kubenswrapper[4758]: I0130 08:51:46.012066 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:51:46 crc kubenswrapper[4758]: I0130 08:51:46.012163 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:51:46 crc kubenswrapper[4758]: I0130 08:51:46.013104 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"e8e2a20c9ea4e39eefa6de1113690b21340a717486562346be315bf42b27f2b0"} pod="openstack/horizon-76fc974bd8-4mnvj" containerMessage="Container horizon failed startup probe, will be restarted" Jan 30 08:51:46 crc kubenswrapper[4758]: I0130 08:51:46.013151 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" containerID="cri-o://e8e2a20c9ea4e39eefa6de1113690b21340a717486562346be315bf42b27f2b0" gracePeriod=30 Jan 30 08:51:46 crc kubenswrapper[4758]: I0130 08:51:46.046897 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 08:51:46 crc kubenswrapper[4758]: I0130 08:51:46.046985 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:51:46 crc kubenswrapper[4758]: I0130 08:51:46.047693 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"a1146f1ee627c842c00198864692549d9cec1a00d0f932a99928483d26346150"} pod="openstack/horizon-5cf698bb7b-gp87v" containerMessage="Container horizon failed startup probe, will be restarted" Jan 30 08:51:46 crc kubenswrapper[4758]: I0130 08:51:46.047736 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" containerID="cri-o://a1146f1ee627c842c00198864692549d9cec1a00d0f932a99928483d26346150" gracePeriod=30 Jan 30 08:51:46 crc kubenswrapper[4758]: I0130 08:51:46.568563 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:51:47 crc kubenswrapper[4758]: I0130 08:51:47.540880 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerStarted","Data":"e237696dd8d561d61802d4c992bbd607d9c5b3b20249129551ad1a402c39f754"} Jan 30 08:51:48 crc kubenswrapper[4758]: I0130 08:51:48.549702 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerStarted","Data":"0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22"} Jan 30 08:51:49 crc kubenswrapper[4758]: I0130 08:51:49.559860 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
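The horizon entries just above show the full startup-probe path: repeated connection-refused probes, then "Container horizon failed startup probe, will be restarted", then a kill with gracePeriod=30. A minimal Go sketch of that decision; the probeResult type and the failureThreshold of 3 are illustrative assumptions (the real threshold comes from the pod spec), not kubelet's actual logic:

// Sketch of the startup-probe restart decision recorded above (assumed logic,
// not kubelet source): past the failure threshold, the container is killed
// with the pod's grace period and restarted per its restartPolicy.
package main

import "fmt"

type probeResult struct{ failures, failureThreshold int }

func (p probeResult) exceeded() bool { return p.failures >= p.failureThreshold }

func main() {
	horizon := probeResult{failures: 3, failureThreshold: 3} // threshold value is illustrative
	if horizon.exceeded() {
		gracePeriod := 30 // seconds; matches gracePeriod=30 in the entries above
		fmt.Printf("Container horizon failed startup probe, will be restarted (grace %ds)\n", gracePeriod)
	}
}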
event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerStarted","Data":"5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7"} Jan 30 08:51:50 crc kubenswrapper[4758]: I0130 08:51:50.571422 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerStarted","Data":"0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb"} Jan 30 08:51:52 crc kubenswrapper[4758]: I0130 08:51:52.387509 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:51:52 crc kubenswrapper[4758]: I0130 08:51:52.387825 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:51:52 crc kubenswrapper[4758]: I0130 08:51:52.610826 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerStarted","Data":"418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9"} Jan 30 08:51:52 crc kubenswrapper[4758]: I0130 08:51:52.611018 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 08:51:52 crc kubenswrapper[4758]: I0130 08:51:52.643941 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.837196706 podStartE2EDuration="7.64390948s" podCreationTimestamp="2026-01-30 08:51:45 +0000 UTC" firstStartedPulling="2026-01-30 08:51:46.639820664 +0000 UTC m=+1311.612132215" lastFinishedPulling="2026-01-30 08:51:51.446533438 +0000 UTC m=+1316.418844989" observedRunningTime="2026-01-30 08:51:52.636278518 +0000 UTC m=+1317.608590089" watchObservedRunningTime="2026-01-30 08:51:52.64390948 +0000 UTC m=+1317.616221031" Jan 30 08:51:57 crc kubenswrapper[4758]: I0130 08:51:57.394670 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:51:57 crc kubenswrapper[4758]: I0130 08:51:57.395689 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="ceilometer-central-agent" containerID="cri-o://0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22" gracePeriod=30 Jan 30 08:51:57 crc kubenswrapper[4758]: I0130 08:51:57.396248 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="proxy-httpd" containerID="cri-o://418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9" gracePeriod=30 Jan 30 08:51:57 crc kubenswrapper[4758]: I0130 08:51:57.396336 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="sg-core" containerID="cri-o://0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb" gracePeriod=30 Jan 30 08:51:57 crc kubenswrapper[4758]: I0130 08:51:57.396381 4758 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="ceilometer-notification-agent" containerID="cri-o://5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7" gracePeriod=30 Jan 30 08:51:57 crc kubenswrapper[4758]: I0130 08:51:57.653201 4758 generic.go:334] "Generic (PLEG): container finished" podID="63c569ce-66c7-4001-8478-3f20fb34b143" containerID="418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9" exitCode=0 Jan 30 08:51:57 crc kubenswrapper[4758]: I0130 08:51:57.653495 4758 generic.go:334] "Generic (PLEG): container finished" podID="63c569ce-66c7-4001-8478-3f20fb34b143" containerID="0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb" exitCode=2 Jan 30 08:51:57 crc kubenswrapper[4758]: I0130 08:51:57.653435 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerDied","Data":"418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9"} Jan 30 08:51:57 crc kubenswrapper[4758]: I0130 08:51:57.653526 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerDied","Data":"0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb"} Jan 30 08:51:58 crc kubenswrapper[4758]: I0130 08:51:58.664185 4758 generic.go:334] "Generic (PLEG): container finished" podID="63c569ce-66c7-4001-8478-3f20fb34b143" containerID="5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7" exitCode=0 Jan 30 08:51:58 crc kubenswrapper[4758]: I0130 08:51:58.664231 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerDied","Data":"5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7"} Jan 30 08:51:59 crc kubenswrapper[4758]: I0130 08:51:59.021621 4758 scope.go:117] "RemoveContainer" containerID="cf06190608af651f83eb58cbffe05edad66dc7e1e8fac5df28ea762a91e937ce" Jan 30 08:51:59 crc kubenswrapper[4758]: I0130 08:51:59.046983 4758 scope.go:117] "RemoveContainer" containerID="4ac71eee89e7259c48423a040dbed7ad9542787e6ed5a91e7cacd0c60611852e" Jan 30 08:51:59 crc kubenswrapper[4758]: I0130 08:51:59.067996 4758 scope.go:117] "RemoveContainer" containerID="69a7ca6a35440c8d6a6b5c64641df205b1f58c73899c4a6f6ad67e7fef9baab8" Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.497248 4758 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.608361 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-config-data\") pod \"63c569ce-66c7-4001-8478-3f20fb34b143\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") "
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.608749 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bqwk\" (UniqueName: \"kubernetes.io/projected/63c569ce-66c7-4001-8478-3f20fb34b143-kube-api-access-4bqwk\") pod \"63c569ce-66c7-4001-8478-3f20fb34b143\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") "
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.608950 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-combined-ca-bundle\") pod \"63c569ce-66c7-4001-8478-3f20fb34b143\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") "
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.609063 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-log-httpd\") pod \"63c569ce-66c7-4001-8478-3f20fb34b143\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") "
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.609097 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-run-httpd\") pod \"63c569ce-66c7-4001-8478-3f20fb34b143\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") "
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.609135 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-scripts\") pod \"63c569ce-66c7-4001-8478-3f20fb34b143\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") "
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.609160 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-sg-core-conf-yaml\") pod \"63c569ce-66c7-4001-8478-3f20fb34b143\" (UID: \"63c569ce-66c7-4001-8478-3f20fb34b143\") "
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.609445 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "63c569ce-66c7-4001-8478-3f20fb34b143" (UID: "63c569ce-66c7-4001-8478-3f20fb34b143"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.609517 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "63c569ce-66c7-4001-8478-3f20fb34b143" (UID: "63c569ce-66c7-4001-8478-3f20fb34b143"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.609711 4758 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.609734 4758 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63c569ce-66c7-4001-8478-3f20fb34b143-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.615176 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-scripts" (OuterVolumeSpecName: "scripts") pod "63c569ce-66c7-4001-8478-3f20fb34b143" (UID: "63c569ce-66c7-4001-8478-3f20fb34b143"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.627060 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63c569ce-66c7-4001-8478-3f20fb34b143-kube-api-access-4bqwk" (OuterVolumeSpecName: "kube-api-access-4bqwk") pod "63c569ce-66c7-4001-8478-3f20fb34b143" (UID: "63c569ce-66c7-4001-8478-3f20fb34b143"). InnerVolumeSpecName "kube-api-access-4bqwk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.650206 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "63c569ce-66c7-4001-8478-3f20fb34b143" (UID: "63c569ce-66c7-4001-8478-3f20fb34b143"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.682229 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63c569ce-66c7-4001-8478-3f20fb34b143" (UID: "63c569ce-66c7-4001-8478-3f20fb34b143"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.693707 4758 generic.go:334] "Generic (PLEG): container finished" podID="63c569ce-66c7-4001-8478-3f20fb34b143" containerID="0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22" exitCode=0
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.693755 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerDied","Data":"0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22"}
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.693789 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63c569ce-66c7-4001-8478-3f20fb34b143","Type":"ContainerDied","Data":"e237696dd8d561d61802d4c992bbd607d9c5b3b20249129551ad1a402c39f754"}
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.693809 4758 scope.go:117] "RemoveContainer" containerID="418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9"
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.693966 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.711984 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.712012 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.712022 4758 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.712031 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bqwk\" (UniqueName: \"kubernetes.io/projected/63c569ce-66c7-4001-8478-3f20fb34b143-kube-api-access-4bqwk\") on node \"crc\" DevicePath \"\""
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.721869 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-config-data" (OuterVolumeSpecName: "config-data") pod "63c569ce-66c7-4001-8478-3f20fb34b143" (UID: "63c569ce-66c7-4001-8478-3f20fb34b143"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.751571 4758 scope.go:117] "RemoveContainer" containerID="0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb"
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.775874 4758 scope.go:117] "RemoveContainer" containerID="5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7"
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.808776 4758 scope.go:117] "RemoveContainer" containerID="0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22"
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.813318 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h"
Jan 30 08:52:01 crc kubenswrapper[4758]: E0130 08:52:01.813604 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 08:52:01 crc kubenswrapper[4758]: E0130 08:52:01.813691 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-75f5775999-fhl5h: configmap "swift-ring-files" not found
Jan 30 08:52:01 crc kubenswrapper[4758]: E0130 08:52:01.813839 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift podName:c2358e5c-db98-4b7b-8b6c-2e83132655a9 nodeName:}" failed. No retries permitted until 2026-01-30 08:53:05.813820145 +0000 UTC m=+1390.786131696 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift") pod "swift-proxy-75f5775999-fhl5h" (UID: "c2358e5c-db98-4b7b-8b6c-2e83132655a9") : configmap "swift-ring-files" not found
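This is the same etc-swift projected-volume failure seen at 08:51:29, still blocked on the missing swift-ring-files configmap; note the retry delay has doubled from 32s to 1m4s, the exponential-backoff pattern nestedpendingoperations applies per volume operation. A minimal Go sketch of that doubling; the 16s seed and two-minute cap are assumptions for illustration, not kubelet's exact defaults:

// Sketch of the per-operation retry backoff visible for the etc-swift volume:
// the wait doubles on each consecutive failure, up to some maximum.
package main

import (
	"fmt"
	"time"
)

func nextBackoff(cur, max time.Duration) time.Duration {
	next := cur * 2
	if next > max {
		return max
	}
	return next
}

func main() {
	d := 16 * time.Second // assumed starting point for illustration
	for i := 0; i < 4; i++ {
		d = nextBackoff(d, 2*time.Minute)
		fmt.Println(d) // 32s, 1m4s, 2m0s, 2m0s
	}
}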
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.813721 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63c569ce-66c7-4001-8478-3f20fb34b143-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.831548 4758 scope.go:117] "RemoveContainer" containerID="418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9"
Jan 30 08:52:01 crc kubenswrapper[4758]: E0130 08:52:01.838704 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9\": container with ID starting with 418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9 not found: ID does not exist" containerID="418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9"
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.838812 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9"} err="failed to get container status \"418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9\": rpc error: code = NotFound desc = could not find container \"418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9\": container with ID starting with 418812c1bf3a338cf131e672c286343fd78f15b3e56399f54f0a8119e87f8be9 not found: ID does not exist"
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.838888 4758 scope.go:117] "RemoveContainer" containerID="0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb"
Jan 30 08:52:01 crc kubenswrapper[4758]: E0130 08:52:01.839263 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb\": container with ID starting with 0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb not found: ID does not exist" containerID="0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb"
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.839287 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb"} err="failed to get container status \"0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb\": rpc error: code = NotFound desc = could not find container \"0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb\": container with ID starting with 0b2477fcd00d759172ac28f6eae8302734acf98a6998acd6de9b5fb7c35b75bb not found: ID does not exist"
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.839301 4758 scope.go:117] "RemoveContainer" containerID="5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7"
Jan 30 08:52:01 crc kubenswrapper[4758]: E0130 08:52:01.839508 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7\": container with ID starting with 5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7 not found: ID does not exist" containerID="5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7"
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.839528 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7"} err="failed to get container status \"5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7\": rpc error: code = NotFound desc = could not find container \"5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7\": container with ID starting with 5252c303e21cdbb1ce7131139e94eb3c4b1dd1a66e2e0f4e048b756a5bc879d7 not found: ID does not exist"
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.839542 4758 scope.go:117] "RemoveContainer" containerID="0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22"
Jan 30 08:52:01 crc kubenswrapper[4758]: E0130 08:52:01.839873 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22\": container with ID starting with 0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22 not found: ID does not exist" containerID="0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22"
Jan 30 08:52:01 crc kubenswrapper[4758]: I0130 08:52:01.839930 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22"} err="failed to get container status \"0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22\": rpc error: code = NotFound desc = could not find container \"0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22\": container with ID starting with 0771623c29ed7bc62473fcf73d9e188aee038db5b06303a7a6a3fd74dc736f22 not found: ID does not exist"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.018777 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.026897 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.046475 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 08:52:02 crc kubenswrapper[4758]: E0130 08:52:02.046956 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="ceilometer-central-agent"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.046980 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="ceilometer-central-agent"
Jan 30 08:52:02 crc kubenswrapper[4758]: E0130 08:52:02.046994 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="sg-core"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.047002 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="sg-core"
Jan 30 08:52:02 crc kubenswrapper[4758]: E0130 08:52:02.047024 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="ceilometer-notification-agent"
Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.047032 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="ceilometer-notification-agent"
containerName="ceilometer-notification-agent" Jan 30 08:52:02 crc kubenswrapper[4758]: E0130 08:52:02.047372 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="proxy-httpd" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.047390 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="proxy-httpd" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.047605 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="sg-core" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.047630 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="ceilometer-central-agent" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.047646 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="ceilometer-notification-agent" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.047669 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" containerName="proxy-httpd" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.049760 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.059386 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.059549 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.065795 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.120103 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-scripts\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.120154 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-run-httpd\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.120182 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.120212 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-log-httpd\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.120258 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8xtwk\" (UniqueName: \"kubernetes.io/projected/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-kube-api-access-8xtwk\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.120319 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-config-data\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.120340 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.222299 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xtwk\" (UniqueName: \"kubernetes.io/projected/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-kube-api-access-8xtwk\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.222410 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-config-data\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.222498 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.222579 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-scripts\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.222608 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-run-httpd\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.222625 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.222655 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-log-httpd\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.223110 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-log-httpd\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.223721 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-run-httpd\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.226868 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-scripts\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.226956 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.227851 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.228888 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-config-data\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.244082 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xtwk\" (UniqueName: \"kubernetes.io/projected/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-kube-api-access-8xtwk\") pod \"ceilometer-0\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.366102 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.850958 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:02 crc kubenswrapper[4758]: W0130 08:52:02.907150 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8782f55b_f6c6_4fa1_bf56_1cea92cac9c9.slice/crio-65dc36a2f4e4d15659ca98b06b09b9503be01a5727ed309a172875a20e2d3981 WatchSource:0}: Error finding container 65dc36a2f4e4d15659ca98b06b09b9503be01a5727ed309a172875a20e2d3981: Status 404 returned error can't find the container with id 65dc36a2f4e4d15659ca98b06b09b9503be01a5727ed309a172875a20e2d3981 Jan 30 08:52:02 crc kubenswrapper[4758]: I0130 08:52:02.917461 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:52:03 crc kubenswrapper[4758]: I0130 08:52:03.711363 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerStarted","Data":"7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e"} Jan 30 08:52:03 crc kubenswrapper[4758]: I0130 08:52:03.711668 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerStarted","Data":"65dc36a2f4e4d15659ca98b06b09b9503be01a5727ed309a172875a20e2d3981"} Jan 30 08:52:03 crc kubenswrapper[4758]: I0130 08:52:03.712844 4758 generic.go:334] "Generic (PLEG): container finished" podID="06a33948-1e21-49dd-9f48-b4c188ae6e9d" containerID="638aaa1aba025b2a9d201ffacd37903c3bef07833c13c4c8b9d80743c4260d8f" exitCode=0 Jan 30 08:52:03 crc kubenswrapper[4758]: I0130 08:52:03.712873 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-chr6l" event={"ID":"06a33948-1e21-49dd-9f48-b4c188ae6e9d","Type":"ContainerDied","Data":"638aaa1aba025b2a9d201ffacd37903c3bef07833c13c4c8b9d80743c4260d8f"} Jan 30 08:52:03 crc kubenswrapper[4758]: I0130 08:52:03.780802 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63c569ce-66c7-4001-8478-3f20fb34b143" path="/var/lib/kubelet/pods/63c569ce-66c7-4001-8478-3f20fb34b143/volumes" Jan 30 08:52:04 crc kubenswrapper[4758]: I0130 08:52:04.722572 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerStarted","Data":"e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54"} Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.056375 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.200763 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-config-data\") pod \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.200859 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-combined-ca-bundle\") pod \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.200918 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s5dr\" (UniqueName: \"kubernetes.io/projected/06a33948-1e21-49dd-9f48-b4c188ae6e9d-kube-api-access-4s5dr\") pod \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.200949 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-scripts\") pod \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\" (UID: \"06a33948-1e21-49dd-9f48-b4c188ae6e9d\") " Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.206165 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-scripts" (OuterVolumeSpecName: "scripts") pod "06a33948-1e21-49dd-9f48-b4c188ae6e9d" (UID: "06a33948-1e21-49dd-9f48-b4c188ae6e9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.206350 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a33948-1e21-49dd-9f48-b4c188ae6e9d-kube-api-access-4s5dr" (OuterVolumeSpecName: "kube-api-access-4s5dr") pod "06a33948-1e21-49dd-9f48-b4c188ae6e9d" (UID: "06a33948-1e21-49dd-9f48-b4c188ae6e9d"). InnerVolumeSpecName "kube-api-access-4s5dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.240330 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-config-data" (OuterVolumeSpecName: "config-data") pod "06a33948-1e21-49dd-9f48-b4c188ae6e9d" (UID: "06a33948-1e21-49dd-9f48-b4c188ae6e9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.247505 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06a33948-1e21-49dd-9f48-b4c188ae6e9d" (UID: "06a33948-1e21-49dd-9f48-b4c188ae6e9d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.303626 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.303662 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.303693 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s5dr\" (UniqueName: \"kubernetes.io/projected/06a33948-1e21-49dd-9f48-b4c188ae6e9d-kube-api-access-4s5dr\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.303701 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06a33948-1e21-49dd-9f48-b4c188ae6e9d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.736278 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-chr6l" event={"ID":"06a33948-1e21-49dd-9f48-b4c188ae6e9d","Type":"ContainerDied","Data":"ff53183262e2ed46b22c3c8849edd4292f90605608d79f5735d4ef8ac3de71f3"} Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.737722 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff53183262e2ed46b22c3c8849edd4292f90605608d79f5735d4ef8ac3de71f3" Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.736567 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-chr6l" Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.741326 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerStarted","Data":"0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241"} Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.863611 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 08:52:05 crc kubenswrapper[4758]: E0130 08:52:05.864030 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a33948-1e21-49dd-9f48-b4c188ae6e9d" containerName="nova-cell0-conductor-db-sync" Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.864064 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a33948-1e21-49dd-9f48-b4c188ae6e9d" containerName="nova-cell0-conductor-db-sync" Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.864311 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a33948-1e21-49dd-9f48-b4c188ae6e9d" containerName="nova-cell0-conductor-db-sync" Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.865171 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.871091 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wlml7" Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.871195 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 08:52:05 crc kubenswrapper[4758]: I0130 08:52:05.900074 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.040028 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg4s2\" (UniqueName: \"kubernetes.io/projected/35f68976-d7c9-453e-a02c-cd4119a55e3b-kube-api-access-hg4s2\") pod \"nova-cell0-conductor-0\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.040107 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.040272 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.141791 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.141864 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg4s2\" (UniqueName: \"kubernetes.io/projected/35f68976-d7c9-453e-a02c-cd4119a55e3b-kube-api-access-hg4s2\") pod \"nova-cell0-conductor-0\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.141894 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.149928 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.155224 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.159604 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg4s2\" (UniqueName: \"kubernetes.io/projected/35f68976-d7c9-453e-a02c-cd4119a55e3b-kube-api-access-hg4s2\") pod \"nova-cell0-conductor-0\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.180765 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.704409 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 08:52:06 crc kubenswrapper[4758]: I0130 08:52:06.780187 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"35f68976-d7c9-453e-a02c-cd4119a55e3b","Type":"ContainerStarted","Data":"d7166f91e2c58a7c0393d358a9d02c9055600e84476d26b20a2d3fae382bf66f"} Jan 30 08:52:07 crc kubenswrapper[4758]: I0130 08:52:07.519580 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 08:52:07 crc kubenswrapper[4758]: I0130 08:52:07.789546 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"35f68976-d7c9-453e-a02c-cd4119a55e3b","Type":"ContainerStarted","Data":"57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e"} Jan 30 08:52:07 crc kubenswrapper[4758]: I0130 08:52:07.790656 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:07 crc kubenswrapper[4758]: I0130 08:52:07.792440 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerStarted","Data":"64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27"} Jan 30 08:52:07 crc kubenswrapper[4758]: I0130 08:52:07.793675 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 08:52:07 crc kubenswrapper[4758]: I0130 08:52:07.820607 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.8205847569999998 podStartE2EDuration="2.820584757s" podCreationTimestamp="2026-01-30 08:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:52:07.810422774 +0000 UTC m=+1332.782734325" watchObservedRunningTime="2026-01-30 08:52:07.820584757 +0000 UTC m=+1332.792896308" Jan 30 08:52:07 crc kubenswrapper[4758]: I0130 08:52:07.861226 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.782653614 podStartE2EDuration="5.861206438s" podCreationTimestamp="2026-01-30 08:52:02 +0000 UTC" firstStartedPulling="2026-01-30 08:52:02.917110507 +0000 UTC m=+1327.889422058" lastFinishedPulling="2026-01-30 08:52:06.995663341 +0000 UTC m=+1331.967974882" observedRunningTime="2026-01-30 08:52:07.856416405 +0000 UTC m=+1332.828727956" watchObservedRunningTime="2026-01-30 08:52:07.861206438 +0000 UTC m=+1332.833517989" Jan 30 08:52:08 crc kubenswrapper[4758]: I0130 08:52:08.801735 4758 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-cell0-conductor-0" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" containerID="cri-o://57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" gracePeriod=30 Jan 30 08:52:10 crc kubenswrapper[4758]: I0130 08:52:10.231773 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:10 crc kubenswrapper[4758]: I0130 08:52:10.820983 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="ceilometer-central-agent" containerID="cri-o://7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e" gracePeriod=30 Jan 30 08:52:10 crc kubenswrapper[4758]: I0130 08:52:10.821125 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="proxy-httpd" containerID="cri-o://64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27" gracePeriod=30 Jan 30 08:52:10 crc kubenswrapper[4758]: I0130 08:52:10.821159 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="sg-core" containerID="cri-o://0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241" gracePeriod=30 Jan 30 08:52:10 crc kubenswrapper[4758]: I0130 08:52:10.821266 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="ceilometer-notification-agent" containerID="cri-o://e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54" gracePeriod=30 Jan 30 08:52:11 crc kubenswrapper[4758]: E0130 08:52:11.183813 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:11 crc kubenswrapper[4758]: E0130 08:52:11.189001 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:11 crc kubenswrapper[4758]: E0130 08:52:11.191483 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:11 crc kubenswrapper[4758]: E0130 08:52:11.191594 4758 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" Jan 30 08:52:11 crc kubenswrapper[4758]: I0130 08:52:11.833755 4758 generic.go:334] "Generic (PLEG): container finished" podID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" 
containerID="64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27" exitCode=0 Jan 30 08:52:11 crc kubenswrapper[4758]: I0130 08:52:11.834009 4758 generic.go:334] "Generic (PLEG): container finished" podID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerID="0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241" exitCode=2 Jan 30 08:52:11 crc kubenswrapper[4758]: I0130 08:52:11.834019 4758 generic.go:334] "Generic (PLEG): container finished" podID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerID="e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54" exitCode=0 Jan 30 08:52:11 crc kubenswrapper[4758]: I0130 08:52:11.833853 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerDied","Data":"64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27"} Jan 30 08:52:11 crc kubenswrapper[4758]: I0130 08:52:11.834065 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerDied","Data":"0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241"} Jan 30 08:52:11 crc kubenswrapper[4758]: I0130 08:52:11.834075 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerDied","Data":"e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54"} Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.309420 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.407743 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-config-data\") pod \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.407894 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-run-httpd\") pod \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.407923 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-scripts\") pod \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.408002 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-combined-ca-bundle\") pod \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.408059 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-sg-core-conf-yaml\") pod \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.408077 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-log-httpd\") pod \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.408187 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xtwk\" (UniqueName: \"kubernetes.io/projected/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-kube-api-access-8xtwk\") pod \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\" (UID: \"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9\") " Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.408257 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" (UID: "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.408797 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" (UID: "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.409336 4758 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.409352 4758 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.426838 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-kube-api-access-8xtwk" (OuterVolumeSpecName: "kube-api-access-8xtwk") pod "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" (UID: "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9"). InnerVolumeSpecName "kube-api-access-8xtwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.428305 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-scripts" (OuterVolumeSpecName: "scripts") pod "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" (UID: "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.441725 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" (UID: "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.475547 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" (UID: "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.496835 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-config-data" (OuterVolumeSpecName: "config-data") pod "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" (UID: "8782f55b-f6c6-4fa1-bf56-1cea92cac9c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.511681 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.511720 4758 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.511732 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xtwk\" (UniqueName: \"kubernetes.io/projected/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-kube-api-access-8xtwk\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.511745 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.511756 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.846960 4758 generic.go:334] "Generic (PLEG): container finished" podID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerID="7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e" exitCode=0 Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.847001 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.847018 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerDied","Data":"7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e"} Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.856195 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8782f55b-f6c6-4fa1-bf56-1cea92cac9c9","Type":"ContainerDied","Data":"65dc36a2f4e4d15659ca98b06b09b9503be01a5727ed309a172875a20e2d3981"} Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.856223 4758 scope.go:117] "RemoveContainer" containerID="64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.902397 4758 scope.go:117] "RemoveContainer" containerID="0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.907610 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.921906 4758 scope.go:117] "RemoveContainer" containerID="e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.926889 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.950703 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:12 crc kubenswrapper[4758]: E0130 08:52:12.951064 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="proxy-httpd" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.951080 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="proxy-httpd" Jan 30 08:52:12 crc kubenswrapper[4758]: E0130 08:52:12.951099 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="sg-core" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.951107 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="sg-core" Jan 30 08:52:12 crc kubenswrapper[4758]: E0130 08:52:12.951122 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="ceilometer-notification-agent" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.951127 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="ceilometer-notification-agent" Jan 30 08:52:12 crc kubenswrapper[4758]: E0130 08:52:12.951134 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="ceilometer-central-agent" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.951139 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="ceilometer-central-agent" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.951296 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="proxy-httpd" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.951312 4758 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="ceilometer-central-agent" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.951319 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="ceilometer-notification-agent" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.951326 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" containerName="sg-core" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.952835 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.961990 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.962462 4758 scope.go:117] "RemoveContainer" containerID="7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e" Jan 30 08:52:12 crc kubenswrapper[4758]: I0130 08:52:12.963317 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.032134 4758 scope.go:117] "RemoveContainer" containerID="64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27" Jan 30 08:52:13 crc kubenswrapper[4758]: E0130 08:52:13.032814 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27\": container with ID starting with 64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27 not found: ID does not exist" containerID="64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.032878 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27"} err="failed to get container status \"64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27\": rpc error: code = NotFound desc = could not find container \"64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27\": container with ID starting with 64db4dda53de6fe4d194b39dd9d43292306390279cfc422020a1abc0f943ef27 not found: ID does not exist" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.032911 4758 scope.go:117] "RemoveContainer" containerID="0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241" Jan 30 08:52:13 crc kubenswrapper[4758]: E0130 08:52:13.033417 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241\": container with ID starting with 0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241 not found: ID does not exist" containerID="0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.033472 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241"} err="failed to get container status \"0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241\": rpc error: code = NotFound desc = could not find container \"0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241\": container with ID 
starting with 0305e652253a5b97d8d60c8a169f18194ef2b78f7ab96e972d5e00669d722241 not found: ID does not exist" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.033506 4758 scope.go:117] "RemoveContainer" containerID="e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54" Jan 30 08:52:13 crc kubenswrapper[4758]: E0130 08:52:13.033869 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54\": container with ID starting with e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54 not found: ID does not exist" containerID="e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.033896 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54"} err="failed to get container status \"e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54\": rpc error: code = NotFound desc = could not find container \"e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54\": container with ID starting with e28629a943c46a443cfafd206a3bae02c8f924b150fc01c6c9ef0d2c64483f54 not found: ID does not exist" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.033911 4758 scope.go:117] "RemoveContainer" containerID="7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e" Jan 30 08:52:13 crc kubenswrapper[4758]: E0130 08:52:13.034263 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e\": container with ID starting with 7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e not found: ID does not exist" containerID="7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.034292 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e"} err="failed to get container status \"7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e\": rpc error: code = NotFound desc = could not find container \"7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e\": container with ID starting with 7d0c2966bfa31094f2e7dd0c554acc45d626f6c8b3ae3f959989eaa425e18f9e not found: ID does not exist" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.101155 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.128706 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-scripts\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.129540 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-log-httpd\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.129648 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.129757 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-run-httpd\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.129862 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpqsg\" (UniqueName: \"kubernetes.io/projected/e3078c00-f908-4759-86c3-ce6109c669c9-kube-api-access-mpqsg\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.132367 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-config-data\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.132535 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.234489 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-run-httpd\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.234562 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpqsg\" (UniqueName: \"kubernetes.io/projected/e3078c00-f908-4759-86c3-ce6109c669c9-kube-api-access-mpqsg\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.234638 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-config-data\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.234690 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.234733 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-scripts\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" 
Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.234761 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-log-httpd\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.234782 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.235496 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-run-httpd\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.235664 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-log-httpd\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.241021 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-config-data\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.245923 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-scripts\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.254827 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.256956 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.261948 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpqsg\" (UniqueName: \"kubernetes.io/projected/e3078c00-f908-4759-86c3-ce6109c669c9-kube-api-access-mpqsg\") pod \"ceilometer-0\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.334247 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.782007 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8782f55b-f6c6-4fa1-bf56-1cea92cac9c9" path="/var/lib/kubelet/pods/8782f55b-f6c6-4fa1-bf56-1cea92cac9c9/volumes" Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.819932 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:13 crc kubenswrapper[4758]: I0130 08:52:13.863417 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerStarted","Data":"740c53299b30df6a094dca8b4457e862311172f260474801e83e635d2a8a2502"} Jan 30 08:52:14 crc kubenswrapper[4758]: I0130 08:52:14.873717 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerStarted","Data":"4a1277446a2960bf740c100d86ca47dbf20eb3e4bde61a0d988583f9b578d9e7"} Jan 30 08:52:15 crc kubenswrapper[4758]: I0130 08:52:15.885985 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerStarted","Data":"98a7d35980c739006609bb2df4d25cb95044049a6931e6f8f91cb240bd06c8a4"} Jan 30 08:52:16 crc kubenswrapper[4758]: E0130 08:52:16.200569 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:16 crc kubenswrapper[4758]: E0130 08:52:16.205708 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:16 crc kubenswrapper[4758]: E0130 08:52:16.216605 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:16 crc kubenswrapper[4758]: E0130 08:52:16.216677 4758 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" Jan 30 08:52:16 crc kubenswrapper[4758]: I0130 08:52:16.898904 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerStarted","Data":"aa56c2aceef9856376d1db7b6df91462c755abb71029db070b163f5d024efaff"} Jan 30 08:52:16 crc kubenswrapper[4758]: I0130 08:52:16.901421 4758 generic.go:334] "Generic (PLEG): container finished" podID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerID="a1146f1ee627c842c00198864692549d9cec1a00d0f932a99928483d26346150" exitCode=137 Jan 30 08:52:16 crc kubenswrapper[4758]: I0130 08:52:16.901475 4758 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf698bb7b-gp87v" event={"ID":"97906db2-3b2d-44ec-af77-d3edf75b7f76","Type":"ContainerDied","Data":"a1146f1ee627c842c00198864692549d9cec1a00d0f932a99928483d26346150"} Jan 30 08:52:16 crc kubenswrapper[4758]: I0130 08:52:16.901502 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf698bb7b-gp87v" event={"ID":"97906db2-3b2d-44ec-af77-d3edf75b7f76","Type":"ContainerStarted","Data":"08a2a2ba15e166e2f950a796369d8a82583e326b34dda5dca164aed61eef649b"} Jan 30 08:52:16 crc kubenswrapper[4758]: I0130 08:52:16.901517 4758 scope.go:117] "RemoveContainer" containerID="33c5db16903a0ea24fc43801695a658eca16a53583c2497012de7812e4d3cb27" Jan 30 08:52:16 crc kubenswrapper[4758]: I0130 08:52:16.928983 4758 generic.go:334] "Generic (PLEG): container finished" podID="365b123c-aa7f-464d-b659-78154f86d42f" containerID="e8e2a20c9ea4e39eefa6de1113690b21340a717486562346be315bf42b27f2b0" exitCode=137 Jan 30 08:52:16 crc kubenswrapper[4758]: I0130 08:52:16.929055 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerDied","Data":"e8e2a20c9ea4e39eefa6de1113690b21340a717486562346be315bf42b27f2b0"} Jan 30 08:52:16 crc kubenswrapper[4758]: I0130 08:52:16.929098 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerStarted","Data":"f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b"} Jan 30 08:52:17 crc kubenswrapper[4758]: I0130 08:52:17.126369 4758 scope.go:117] "RemoveContainer" containerID="5ac24c2e0d10a94520a56718b5cb7e279d3222a4776ef2910ce318c3d652e18a" Jan 30 08:52:18 crc kubenswrapper[4758]: I0130 08:52:18.958467 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerStarted","Data":"ba5c3e6a8e07b0dd57a9e4b2dce16091167ff8d6309149424a387d5848c250d7"} Jan 30 08:52:18 crc kubenswrapper[4758]: I0130 08:52:18.959137 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 08:52:18 crc kubenswrapper[4758]: I0130 08:52:18.986762 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.115802566 podStartE2EDuration="6.986733195s" podCreationTimestamp="2026-01-30 08:52:12 +0000 UTC" firstStartedPulling="2026-01-30 08:52:13.829860847 +0000 UTC m=+1338.802172398" lastFinishedPulling="2026-01-30 08:52:17.700791476 +0000 UTC m=+1342.673103027" observedRunningTime="2026-01-30 08:52:18.982549564 +0000 UTC m=+1343.954861115" watchObservedRunningTime="2026-01-30 08:52:18.986733195 +0000 UTC m=+1343.959044746" Jan 30 08:52:20 crc kubenswrapper[4758]: E0130 08:52:20.406144 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="f978baf9-b7c0-4d25-8bca-e95a018ba2af" Jan 30 08:52:20 crc kubenswrapper[4758]: I0130 08:52:20.977414 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 08:52:21 crc kubenswrapper[4758]: E0130 08:52:21.183226 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:21 crc kubenswrapper[4758]: E0130 08:52:21.184583 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:21 crc kubenswrapper[4758]: E0130 08:52:21.196271 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:21 crc kubenswrapper[4758]: E0130 08:52:21.196383 4758 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" Jan 30 08:52:22 crc kubenswrapper[4758]: I0130 08:52:22.388614 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:52:22 crc kubenswrapper[4758]: I0130 08:52:22.389054 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:52:22 crc kubenswrapper[4758]: I0130 08:52:22.389116 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:52:22 crc kubenswrapper[4758]: I0130 08:52:22.390195 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b01f58947342357281d9de34ab8ce6d30a071f097fc6b75d76a38795a72373c2"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:52:22 crc kubenswrapper[4758]: I0130 08:52:22.390262 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://b01f58947342357281d9de34ab8ce6d30a071f097fc6b75d76a38795a72373c2" gracePeriod=600 Jan 30 08:52:22 crc kubenswrapper[4758]: I0130 08:52:22.997018 4758 generic.go:334] "Generic (PLEG): container finished" 
podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="b01f58947342357281d9de34ab8ce6d30a071f097fc6b75d76a38795a72373c2" exitCode=0 Jan 30 08:52:22 crc kubenswrapper[4758]: I0130 08:52:22.997437 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"b01f58947342357281d9de34ab8ce6d30a071f097fc6b75d76a38795a72373c2"} Jan 30 08:52:22 crc kubenswrapper[4758]: I0130 08:52:22.997474 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275"} Jan 30 08:52:22 crc kubenswrapper[4758]: I0130 08:52:22.997496 4758 scope.go:117] "RemoveContainer" containerID="30f01f1e437d64a8924bada24b4f475f6ae60692a25492cb418ecb9bb3c281c2" Jan 30 08:52:24 crc kubenswrapper[4758]: I0130 08:52:24.067383 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:52:24 crc kubenswrapper[4758]: E0130 08:52:24.067638 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:52:24 crc kubenswrapper[4758]: E0130 08:52:24.067952 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:52:24 crc kubenswrapper[4758]: E0130 08:52:24.068029 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:54:26.068007338 +0000 UTC m=+1471.040318899 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:52:26 crc kubenswrapper[4758]: I0130 08:52:26.009841 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:52:26 crc kubenswrapper[4758]: I0130 08:52:26.010190 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:52:26 crc kubenswrapper[4758]: I0130 08:52:26.010608 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:52:26 crc kubenswrapper[4758]: I0130 08:52:26.043107 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:52:26 crc kubenswrapper[4758]: I0130 08:52:26.044248 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:52:26 crc kubenswrapper[4758]: I0130 08:52:26.045904 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 08:52:26 crc kubenswrapper[4758]: E0130 08:52:26.183242 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:26 crc kubenswrapper[4758]: E0130 08:52:26.192613 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:26 crc kubenswrapper[4758]: E0130 08:52:26.194168 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:26 crc kubenswrapper[4758]: E0130 08:52:26.194232 4758 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" Jan 30 08:52:31 crc kubenswrapper[4758]: E0130 08:52:31.184905 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , 
exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:31 crc kubenswrapper[4758]: E0130 08:52:31.206030 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:31 crc kubenswrapper[4758]: E0130 08:52:31.225748 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:31 crc kubenswrapper[4758]: E0130 08:52:31.225869 4758 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" Jan 30 08:52:36 crc kubenswrapper[4758]: I0130 08:52:36.019180 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:52:36 crc kubenswrapper[4758]: I0130 08:52:36.043709 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 30 08:52:36 crc kubenswrapper[4758]: E0130 08:52:36.184115 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:36 crc kubenswrapper[4758]: E0130 08:52:36.185911 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:36 crc kubenswrapper[4758]: E0130 08:52:36.188503 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 08:52:36 crc kubenswrapper[4758]: E0130 08:52:36.188566 4758 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/nova-cell0-conductor-0" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.165863 4758 generic.go:334] "Generic (PLEG): container finished" podID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" exitCode=137 Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.165950 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"35f68976-d7c9-453e-a02c-cd4119a55e3b","Type":"ContainerDied","Data":"57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e"} Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.794776 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.879110 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg4s2\" (UniqueName: \"kubernetes.io/projected/35f68976-d7c9-453e-a02c-cd4119a55e3b-kube-api-access-hg4s2\") pod \"35f68976-d7c9-453e-a02c-cd4119a55e3b\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.879348 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-combined-ca-bundle\") pod \"35f68976-d7c9-453e-a02c-cd4119a55e3b\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.879380 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-config-data\") pod \"35f68976-d7c9-453e-a02c-cd4119a55e3b\" (UID: \"35f68976-d7c9-453e-a02c-cd4119a55e3b\") " Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.890301 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f68976-d7c9-453e-a02c-cd4119a55e3b-kube-api-access-hg4s2" (OuterVolumeSpecName: "kube-api-access-hg4s2") pod "35f68976-d7c9-453e-a02c-cd4119a55e3b" (UID: "35f68976-d7c9-453e-a02c-cd4119a55e3b"). InnerVolumeSpecName "kube-api-access-hg4s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.918777 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-config-data" (OuterVolumeSpecName: "config-data") pod "35f68976-d7c9-453e-a02c-cd4119a55e3b" (UID: "35f68976-d7c9-453e-a02c-cd4119a55e3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.920333 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35f68976-d7c9-453e-a02c-cd4119a55e3b" (UID: "35f68976-d7c9-453e-a02c-cd4119a55e3b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.981844 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg4s2\" (UniqueName: \"kubernetes.io/projected/35f68976-d7c9-453e-a02c-cd4119a55e3b-kube-api-access-hg4s2\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.982402 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:39 crc kubenswrapper[4758]: I0130 08:52:39.982414 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f68976-d7c9-453e-a02c-cd4119a55e3b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.177154 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.177147 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"35f68976-d7c9-453e-a02c-cd4119a55e3b","Type":"ContainerDied","Data":"d7166f91e2c58a7c0393d358a9d02c9055600e84476d26b20a2d3fae382bf66f"} Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.177909 4758 scope.go:117] "RemoveContainer" containerID="57e221474e204794bd730825868693a047c143c54c71c3b1b792b3c01b2fcc8e" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.229304 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.239621 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.262718 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 08:52:40 crc kubenswrapper[4758]: E0130 08:52:40.263276 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.263296 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.263533 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" containerName="nova-cell0-conductor-conductor" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.264395 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.268957 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.271158 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wlml7" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.298049 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.394390 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5c8d4f1-2007-458e-a918-35eea3933622-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d5c8d4f1-2007-458e-a918-35eea3933622\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.394516 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frv6z\" (UniqueName: \"kubernetes.io/projected/d5c8d4f1-2007-458e-a918-35eea3933622-kube-api-access-frv6z\") pod \"nova-cell0-conductor-0\" (UID: \"d5c8d4f1-2007-458e-a918-35eea3933622\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.394606 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c8d4f1-2007-458e-a918-35eea3933622-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d5c8d4f1-2007-458e-a918-35eea3933622\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.497175 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5c8d4f1-2007-458e-a918-35eea3933622-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d5c8d4f1-2007-458e-a918-35eea3933622\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.497323 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frv6z\" (UniqueName: \"kubernetes.io/projected/d5c8d4f1-2007-458e-a918-35eea3933622-kube-api-access-frv6z\") pod \"nova-cell0-conductor-0\" (UID: \"d5c8d4f1-2007-458e-a918-35eea3933622\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.497396 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c8d4f1-2007-458e-a918-35eea3933622-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d5c8d4f1-2007-458e-a918-35eea3933622\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.503073 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c8d4f1-2007-458e-a918-35eea3933622-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d5c8d4f1-2007-458e-a918-35eea3933622\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.510918 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5c8d4f1-2007-458e-a918-35eea3933622-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"d5c8d4f1-2007-458e-a918-35eea3933622\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.530253 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frv6z\" (UniqueName: \"kubernetes.io/projected/d5c8d4f1-2007-458e-a918-35eea3933622-kube-api-access-frv6z\") pod \"nova-cell0-conductor-0\" (UID: \"d5c8d4f1-2007-458e-a918-35eea3933622\") " pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:40 crc kubenswrapper[4758]: I0130 08:52:40.585177 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:41 crc kubenswrapper[4758]: W0130 08:52:41.459725 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5c8d4f1_2007_458e_a918_35eea3933622.slice/crio-b03f328b226d9a2cb18aafa025c241ed77fb311ea4dd0a9767b7308a1e26096f WatchSource:0}: Error finding container b03f328b226d9a2cb18aafa025c241ed77fb311ea4dd0a9767b7308a1e26096f: Status 404 returned error can't find the container with id b03f328b226d9a2cb18aafa025c241ed77fb311ea4dd0a9767b7308a1e26096f Jan 30 08:52:41 crc kubenswrapper[4758]: I0130 08:52:41.466867 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 08:52:41 crc kubenswrapper[4758]: I0130 08:52:41.779465 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35f68976-d7c9-453e-a02c-cd4119a55e3b" path="/var/lib/kubelet/pods/35f68976-d7c9-453e-a02c-cd4119a55e3b/volumes" Jan 30 08:52:42 crc kubenswrapper[4758]: I0130 08:52:42.206024 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d5c8d4f1-2007-458e-a918-35eea3933622","Type":"ContainerStarted","Data":"7d3208a4a87c7be40628ac1af32d22786c33021117db6335d3b791365110b969"} Jan 30 08:52:42 crc kubenswrapper[4758]: I0130 08:52:42.206132 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d5c8d4f1-2007-458e-a918-35eea3933622","Type":"ContainerStarted","Data":"b03f328b226d9a2cb18aafa025c241ed77fb311ea4dd0a9767b7308a1e26096f"} Jan 30 08:52:42 crc kubenswrapper[4758]: I0130 08:52:42.206168 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:42 crc kubenswrapper[4758]: I0130 08:52:42.224887 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.224869144 podStartE2EDuration="2.224869144s" podCreationTimestamp="2026-01-30 08:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:52:42.222729227 +0000 UTC m=+1367.195040788" watchObservedRunningTime="2026-01-30 08:52:42.224869144 +0000 UTC m=+1367.197180685" Jan 30 08:52:43 crc kubenswrapper[4758]: I0130 08:52:43.342160 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 08:52:47 crc kubenswrapper[4758]: I0130 08:52:47.527281 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:52:47 crc kubenswrapper[4758]: I0130 08:52:47.528084 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="cbb296cf-3469-43c3-9ebe-8fd1d31c00a6" containerName="kube-state-metrics" 
containerID="cri-o://a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9" gracePeriod=30 Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.166609 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.256089 4758 generic.go:334] "Generic (PLEG): container finished" podID="cbb296cf-3469-43c3-9ebe-8fd1d31c00a6" containerID="a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9" exitCode=2 Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.256139 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cbb296cf-3469-43c3-9ebe-8fd1d31c00a6","Type":"ContainerDied","Data":"a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9"} Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.256175 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cbb296cf-3469-43c3-9ebe-8fd1d31c00a6","Type":"ContainerDied","Data":"d0e1a0f857fe6a3aca4ee7d7173a8d5441df3a299b48556a5a5526d0a92355f7"} Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.256193 4758 scope.go:117] "RemoveContainer" containerID="a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.256325 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.264244 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c952\" (UniqueName: \"kubernetes.io/projected/cbb296cf-3469-43c3-9ebe-8fd1d31c00a6-kube-api-access-5c952\") pod \"cbb296cf-3469-43c3-9ebe-8fd1d31c00a6\" (UID: \"cbb296cf-3469-43c3-9ebe-8fd1d31c00a6\") " Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.294928 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbb296cf-3469-43c3-9ebe-8fd1d31c00a6-kube-api-access-5c952" (OuterVolumeSpecName: "kube-api-access-5c952") pod "cbb296cf-3469-43c3-9ebe-8fd1d31c00a6" (UID: "cbb296cf-3469-43c3-9ebe-8fd1d31c00a6"). InnerVolumeSpecName "kube-api-access-5c952". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.309190 4758 scope.go:117] "RemoveContainer" containerID="a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9" Jan 30 08:52:48 crc kubenswrapper[4758]: E0130 08:52:48.310831 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9\": container with ID starting with a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9 not found: ID does not exist" containerID="a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.310877 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9"} err="failed to get container status \"a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9\": rpc error: code = NotFound desc = could not find container \"a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9\": container with ID starting with a03cb8c668a7603805727e8c9ed1db6d1febfb4863ba92411a52253f1c6f19d9 not found: ID does not exist" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.367247 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c952\" (UniqueName: \"kubernetes.io/projected/cbb296cf-3469-43c3-9ebe-8fd1d31c00a6-kube-api-access-5c952\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.591350 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.605274 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.617499 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:52:48 crc kubenswrapper[4758]: E0130 08:52:48.617892 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbb296cf-3469-43c3-9ebe-8fd1d31c00a6" containerName="kube-state-metrics" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.617914 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb296cf-3469-43c3-9ebe-8fd1d31c00a6" containerName="kube-state-metrics" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.618129 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbb296cf-3469-43c3-9ebe-8fd1d31c00a6" containerName="kube-state-metrics" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.618721 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.620842 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.632504 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.721278 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.773336 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402b7d3e-d66f-412a-a3a8-4c45a9a47628-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.773448 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/402b7d3e-d66f-412a-a3a8-4c45a9a47628-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.773490 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/402b7d3e-d66f-412a-a3a8-4c45a9a47628-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.773653 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c94f4\" (UniqueName: \"kubernetes.io/projected/402b7d3e-d66f-412a-a3a8-4c45a9a47628-kube-api-access-c94f4\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.875675 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402b7d3e-d66f-412a-a3a8-4c45a9a47628-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.875799 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/402b7d3e-d66f-412a-a3a8-4c45a9a47628-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.875843 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/402b7d3e-d66f-412a-a3a8-4c45a9a47628-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.875906 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c94f4\" 
(UniqueName: \"kubernetes.io/projected/402b7d3e-d66f-412a-a3a8-4c45a9a47628-kube-api-access-c94f4\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.880050 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402b7d3e-d66f-412a-a3a8-4c45a9a47628-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.880559 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/402b7d3e-d66f-412a-a3a8-4c45a9a47628-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.886452 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/402b7d3e-d66f-412a-a3a8-4c45a9a47628-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.900584 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c94f4\" (UniqueName: \"kubernetes.io/projected/402b7d3e-d66f-412a-a3a8-4c45a9a47628-kube-api-access-c94f4\") pod \"kube-state-metrics-0\" (UID: \"402b7d3e-d66f-412a-a3a8-4c45a9a47628\") " pod="openstack/kube-state-metrics-0" Jan 30 08:52:48 crc kubenswrapper[4758]: I0130 08:52:48.936128 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 08:52:49 crc kubenswrapper[4758]: I0130 08:52:49.440027 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 08:52:49 crc kubenswrapper[4758]: I0130 08:52:49.778161 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbb296cf-3469-43c3-9ebe-8fd1d31c00a6" path="/var/lib/kubelet/pods/cbb296cf-3469-43c3-9ebe-8fd1d31c00a6/volumes" Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.002806 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.003423 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="ceilometer-central-agent" containerID="cri-o://4a1277446a2960bf740c100d86ca47dbf20eb3e4bde61a0d988583f9b578d9e7" gracePeriod=30 Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.003551 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="proxy-httpd" containerID="cri-o://ba5c3e6a8e07b0dd57a9e4b2dce16091167ff8d6309149424a387d5848c250d7" gracePeriod=30 Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.003644 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="ceilometer-notification-agent" containerID="cri-o://98a7d35980c739006609bb2df4d25cb95044049a6931e6f8f91cb240bd06c8a4" gracePeriod=30 Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.003782 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="sg-core" containerID="cri-o://aa56c2aceef9856376d1db7b6df91462c755abb71029db070b163f5d024efaff" gracePeriod=30 Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.309185 4758 generic.go:334] "Generic (PLEG): container finished" podID="e3078c00-f908-4759-86c3-ce6109c669c9" containerID="ba5c3e6a8e07b0dd57a9e4b2dce16091167ff8d6309149424a387d5848c250d7" exitCode=0 Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.309227 4758 generic.go:334] "Generic (PLEG): container finished" podID="e3078c00-f908-4759-86c3-ce6109c669c9" containerID="aa56c2aceef9856376d1db7b6df91462c755abb71029db070b163f5d024efaff" exitCode=2 Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.309278 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerDied","Data":"ba5c3e6a8e07b0dd57a9e4b2dce16091167ff8d6309149424a387d5848c250d7"} Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.309311 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerDied","Data":"aa56c2aceef9856376d1db7b6df91462c755abb71029db070b163f5d024efaff"} Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.312533 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"402b7d3e-d66f-412a-a3a8-4c45a9a47628","Type":"ContainerStarted","Data":"a04a59d8b62c3252a6690199f8633042a43e4a9415c8b3ff8e67be1189cc7df5"} Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.614814 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.689288 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:52:50 crc kubenswrapper[4758]: I0130 08:52:50.711427 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.323212 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"402b7d3e-d66f-412a-a3a8-4c45a9a47628","Type":"ContainerStarted","Data":"4714c8bdf10f0594c4277ef50d8cc0f86fd1275aa644540ed93918cd31f79fb4"} Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.323320 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.326674 4758 generic.go:334] "Generic (PLEG): container finished" podID="e3078c00-f908-4759-86c3-ce6109c669c9" containerID="4a1277446a2960bf740c100d86ca47dbf20eb3e4bde61a0d988583f9b578d9e7" exitCode=0 Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.326725 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerDied","Data":"4a1277446a2960bf740c100d86ca47dbf20eb3e4bde61a0d988583f9b578d9e7"} Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.360680 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.633367436 podStartE2EDuration="3.360658289s" podCreationTimestamp="2026-01-30 08:52:48 +0000 UTC" firstStartedPulling="2026-01-30 08:52:49.463334023 +0000 UTC m=+1374.435645574" lastFinishedPulling="2026-01-30 08:52:50.190624876 +0000 UTC m=+1375.162936427" observedRunningTime="2026-01-30 08:52:51.345084982 +0000 UTC m=+1376.317396553" watchObservedRunningTime="2026-01-30 08:52:51.360658289 +0000 UTC m=+1376.332969840" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.475382 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-xp8d6"] Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.476624 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.482212 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.484174 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.494234 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xp8d6"] Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.636181 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-scripts\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.636265 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfl8m\" (UniqueName: \"kubernetes.io/projected/9c7907da-98f4-46c2-9089-7227516cf739-kube-api-access-sfl8m\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.636286 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.636343 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-config-data\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.737828 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-scripts\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.737908 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfl8m\" (UniqueName: \"kubernetes.io/projected/9c7907da-98f4-46c2-9089-7227516cf739-kube-api-access-sfl8m\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.737926 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.738367 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-config-data\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.743799 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-scripts\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.749742 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.768688 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-config-data\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.847236 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfl8m\" (UniqueName: \"kubernetes.io/projected/9c7907da-98f4-46c2-9089-7227516cf739-kube-api-access-sfl8m\") pod \"nova-cell0-cell-mapping-xp8d6\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.927402 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.928584 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.957451 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.968678 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 08:52:51 crc kubenswrapper[4758]: I0130 08:52:51.970455 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: W0130 08:52:52.028352 4758 reflector.go:561] object-"openstack"/"nova-api-config-data": failed to list *v1.Secret: secrets "nova-api-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 30 08:52:52 crc kubenswrapper[4758]: E0130 08:52:52.028590 4758 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-api-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.029877 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.043955 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dglqr\" (UniqueName: \"kubernetes.io/projected/7fbb8129-e72f-4925-a38a-06505fe53fb3-kube-api-access-dglqr\") pod \"nova-scheduler-0\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.044113 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-config-data\") pod \"nova-scheduler-0\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.044135 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.092692 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.097485 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.146106 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwnt5\" (UniqueName: \"kubernetes.io/projected/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-kube-api-access-rwnt5\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.146164 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-logs\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.146239 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-config-data\") pod \"nova-scheduler-0\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.146258 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.146304 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-config-data\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.146329 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.146362 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dglqr\" (UniqueName: \"kubernetes.io/projected/7fbb8129-e72f-4925-a38a-06505fe53fb3-kube-api-access-dglqr\") pod \"nova-scheduler-0\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.169435 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-config-data\") pod \"nova-scheduler-0\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.186494 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.245893 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dglqr\" (UniqueName: 
\"kubernetes.io/projected/7fbb8129-e72f-4925-a38a-06505fe53fb3-kube-api-access-dglqr\") pod \"nova-scheduler-0\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.246353 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.248953 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-config-data\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.249020 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.249164 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwnt5\" (UniqueName: \"kubernetes.io/projected/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-kube-api-access-rwnt5\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.249199 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-logs\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.249868 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-logs\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.257766 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.325403 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwnt5\" (UniqueName: \"kubernetes.io/projected/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-kube-api-access-rwnt5\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.395927 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.401513 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.410251 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.440983 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.558360 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.558443 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.558472 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqbrx\" (UniqueName: \"kubernetes.io/projected/0b4a1798-01a8-4e2f-8c93-ee4053777f75-kube-api-access-qqbrx\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.672137 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.672574 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.672600 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqbrx\" (UniqueName: \"kubernetes.io/projected/0b4a1798-01a8-4e2f-8c93-ee4053777f75-kube-api-access-qqbrx\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.684998 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.703755 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.711881 
4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqbrx\" (UniqueName: \"kubernetes.io/projected/0b4a1798-01a8-4e2f-8c93-ee4053777f75-kube-api-access-qqbrx\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.746650 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.863487 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.869636 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.885876 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.889015 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s284j\" (UniqueName: \"kubernetes.io/projected/01bcaf88-537a-47cd-b50d-6c95e392f2a8-kube-api-access-s284j\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.889169 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-config-data\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.899215 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01bcaf88-537a-47cd-b50d-6c95e392f2a8-logs\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.899529 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.934031 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.955114 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 08:52:52 crc kubenswrapper[4758]: I0130 08:52:52.992509 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-config-data\") pod \"nova-api-0\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") " pod="openstack/nova-api-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.011610 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s284j\" (UniqueName: \"kubernetes.io/projected/01bcaf88-537a-47cd-b50d-6c95e392f2a8-kube-api-access-s284j\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:53 
crc kubenswrapper[4758]: I0130 08:52:53.013157 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-config-data\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.013348 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01bcaf88-537a-47cd-b50d-6c95e392f2a8-logs\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.019676 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.014325 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01bcaf88-537a-47cd-b50d-6c95e392f2a8-logs\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.044566 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.066514 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-config-data\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.068359 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s284j\" (UniqueName: \"kubernetes.io/projected/01bcaf88-537a-47cd-b50d-6c95e392f2a8-kube-api-access-s284j\") pod \"nova-metadata-0\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " pod="openstack/nova-metadata-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.102649 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xp8d6"] Jan 30 08:52:53 crc kubenswrapper[4758]: W0130 08:52:53.148537 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c7907da_98f4_46c2_9089_7227516cf739.slice/crio-9662113a5e6c82a0672966fefbaee39c41d94a66b3bff675c82499c15dc32a24 WatchSource:0}: Error finding container 9662113a5e6c82a0672966fefbaee39c41d94a66b3bff675c82499c15dc32a24: Status 404 returned error can't find the container with id 9662113a5e6c82a0672966fefbaee39c41d94a66b3bff675c82499c15dc32a24 Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.186624 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.234553 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.299536 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c6ccb6797-l4vl5"] Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.301910 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.369940 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-nb\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.370412 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-config\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.370560 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-sb\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.370639 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwttp\" (UniqueName: \"kubernetes.io/projected/88205fec-5592-41f1-a351-daf34b97add7-kube-api-access-gwttp\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.370754 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-dns-svc\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.409429 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c6ccb6797-l4vl5"] Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.573823 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-config\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.574776 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-sb\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.574888 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwttp\" (UniqueName: 
\"kubernetes.io/projected/88205fec-5592-41f1-a351-daf34b97add7-kube-api-access-gwttp\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.574980 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-dns-svc\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.575091 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-nb\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.588534 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-nb\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.590761 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-dns-svc\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.600926 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-config\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.603024 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-sb\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.612663 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xp8d6" event={"ID":"9c7907da-98f4-46c2-9089-7227516cf739","Type":"ContainerStarted","Data":"9662113a5e6c82a0672966fefbaee39c41d94a66b3bff675c82499c15dc32a24"} Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.616328 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwttp\" (UniqueName: \"kubernetes.io/projected/88205fec-5592-41f1-a351-daf34b97add7-kube-api-access-gwttp\") pod \"dnsmasq-dns-7c6ccb6797-l4vl5\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.620669 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.621743 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:52:53 crc kubenswrapper[4758]: I0130 08:52:53.937009 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.197632 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.237731 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.481156 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-95vnq"] Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.484008 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.491963 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.495675 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.498487 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-95vnq"] Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.644416 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-scripts\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.644478 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.644510 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-config-data\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.644557 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq455\" (UniqueName: \"kubernetes.io/projected/42581858-ead1-4898-9fad-72411bf3c6a4-kube-api-access-kq455\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.649185 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c6ccb6797-l4vl5"] Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.654852 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"a20097fd-7119-487f-8a17-6b1f7f0f5bc7","Type":"ContainerStarted","Data":"0554dd0e1bf67b74e082f84d57d630b9e35218363af5e1129b9eccdc1687b263"} Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.663387 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7fbb8129-e72f-4925-a38a-06505fe53fb3","Type":"ContainerStarted","Data":"f59aee87045b6560ae771d39cafdf29ecf7529163d23a6f08629ad7171493e30"} Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.673598 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01bcaf88-537a-47cd-b50d-6c95e392f2a8","Type":"ContainerStarted","Data":"2c2f12b4fc7394d7f62cb7f9a1ba9314155b34b91f9d418d8afd7c811d330122"} Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.689496 4758 generic.go:334] "Generic (PLEG): container finished" podID="e3078c00-f908-4759-86c3-ce6109c669c9" containerID="98a7d35980c739006609bb2df4d25cb95044049a6931e6f8f91cb240bd06c8a4" exitCode=0 Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.689573 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerDied","Data":"98a7d35980c739006609bb2df4d25cb95044049a6931e6f8f91cb240bd06c8a4"} Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.690895 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0b4a1798-01a8-4e2f-8c93-ee4053777f75","Type":"ContainerStarted","Data":"0facd9f58ad103e6d6bf1af4bf8a03e376d0d7dc402e1ef4b1234121e0c87d41"} Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.696566 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xp8d6" event={"ID":"9c7907da-98f4-46c2-9089-7227516cf739","Type":"ContainerStarted","Data":"87c0147985c0330380f0c62d6ffa802a17a1d7f52af945bd4b80ccd53693d21e"} Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.728944 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-xp8d6" podStartSLOduration=3.728910153 podStartE2EDuration="3.728910153s" podCreationTimestamp="2026-01-30 08:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:52:54.723517694 +0000 UTC m=+1379.695829245" watchObservedRunningTime="2026-01-30 08:52:54.728910153 +0000 UTC m=+1379.701221704" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.748263 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-scripts\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.748331 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.748375 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-config-data\") pod 
\"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.748404 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq455\" (UniqueName: \"kubernetes.io/projected/42581858-ead1-4898-9fad-72411bf3c6a4-kube-api-access-kq455\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.762013 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-config-data\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.762907 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-scripts\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.763410 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.772936 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq455\" (UniqueName: \"kubernetes.io/projected/42581858-ead1-4898-9fad-72411bf3c6a4-kube-api-access-kq455\") pod \"nova-cell1-conductor-db-sync-95vnq\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") " pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.829307 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-95vnq" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.840230 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.951987 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-combined-ca-bundle\") pod \"e3078c00-f908-4759-86c3-ce6109c669c9\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.952122 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-run-httpd\") pod \"e3078c00-f908-4759-86c3-ce6109c669c9\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.952166 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-scripts\") pod \"e3078c00-f908-4759-86c3-ce6109c669c9\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.952218 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-log-httpd\") pod \"e3078c00-f908-4759-86c3-ce6109c669c9\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.952452 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-sg-core-conf-yaml\") pod \"e3078c00-f908-4759-86c3-ce6109c669c9\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.952517 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpqsg\" (UniqueName: \"kubernetes.io/projected/e3078c00-f908-4759-86c3-ce6109c669c9-kube-api-access-mpqsg\") pod \"e3078c00-f908-4759-86c3-ce6109c669c9\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.952585 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-config-data\") pod \"e3078c00-f908-4759-86c3-ce6109c669c9\" (UID: \"e3078c00-f908-4759-86c3-ce6109c669c9\") " Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.954913 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e3078c00-f908-4759-86c3-ce6109c669c9" (UID: "e3078c00-f908-4759-86c3-ce6109c669c9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.959154 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e3078c00-f908-4759-86c3-ce6109c669c9" (UID: "e3078c00-f908-4759-86c3-ce6109c669c9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:52:54 crc kubenswrapper[4758]: I0130 08:52:54.984907 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3078c00-f908-4759-86c3-ce6109c669c9-kube-api-access-mpqsg" (OuterVolumeSpecName: "kube-api-access-mpqsg") pod "e3078c00-f908-4759-86c3-ce6109c669c9" (UID: "e3078c00-f908-4759-86c3-ce6109c669c9"). InnerVolumeSpecName "kube-api-access-mpqsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.011504 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-scripts" (OuterVolumeSpecName: "scripts") pod "e3078c00-f908-4759-86c3-ce6109c669c9" (UID: "e3078c00-f908-4759-86c3-ce6109c669c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.062251 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpqsg\" (UniqueName: \"kubernetes.io/projected/e3078c00-f908-4759-86c3-ce6109c669c9-kube-api-access-mpqsg\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.062534 4758 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.062651 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.064155 4758 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3078c00-f908-4759-86c3-ce6109c669c9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.072064 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e3078c00-f908-4759-86c3-ce6109c669c9" (UID: "e3078c00-f908-4759-86c3-ce6109c669c9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.167767 4758 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.359242 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-config-data" (OuterVolumeSpecName: "config-data") pod "e3078c00-f908-4759-86c3-ce6109c669c9" (UID: "e3078c00-f908-4759-86c3-ce6109c669c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.373342 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.416964 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3078c00-f908-4759-86c3-ce6109c669c9" (UID: "e3078c00-f908-4759-86c3-ce6109c669c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.483424 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3078c00-f908-4759-86c3-ce6109c669c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.496419 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-76fc974bd8-4mnvj" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.663327 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-95vnq"] Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.696281 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5cf698bb7b-gp87v" podUID="97906db2-3b2d-44ec-af77-d3edf75b7f76" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.773528 4758 generic.go:334] "Generic (PLEG): container finished" podID="88205fec-5592-41f1-a351-daf34b97add7" containerID="a9b45fc847ddf846195914d0ffbe6a09a108519311beafcfe30dbeedd9e7ad3b" exitCode=0 Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.773668 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" event={"ID":"88205fec-5592-41f1-a351-daf34b97add7","Type":"ContainerDied","Data":"a9b45fc847ddf846195914d0ffbe6a09a108519311beafcfe30dbeedd9e7ad3b"} Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.773719 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" event={"ID":"88205fec-5592-41f1-a351-daf34b97add7","Type":"ContainerStarted","Data":"44f8b069d0be92a8bc294a5f42966c64f273339f26c92d95f7dfd0fd9149bca0"} Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.849779 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.874451 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3078c00-f908-4759-86c3-ce6109c669c9","Type":"ContainerDied","Data":"740c53299b30df6a094dca8b4457e862311172f260474801e83e635d2a8a2502"} Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.874494 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-95vnq" event={"ID":"42581858-ead1-4898-9fad-72411bf3c6a4","Type":"ContainerStarted","Data":"4f43893ac45e4922957242a9db4708b2364bce5aae74e0b958012018e45639fe"} Jan 30 08:52:55 crc kubenswrapper[4758]: I0130 08:52:55.874529 4758 scope.go:117] "RemoveContainer" containerID="ba5c3e6a8e07b0dd57a9e4b2dce16091167ff8d6309149424a387d5848c250d7" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.019670 4758 scope.go:117] "RemoveContainer" containerID="aa56c2aceef9856376d1db7b6df91462c755abb71029db070b163f5d024efaff" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.061350 4758 scope.go:117] "RemoveContainer" containerID="98a7d35980c739006609bb2df4d25cb95044049a6931e6f8f91cb240bd06c8a4" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.070626 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.092274 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.102932 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:56 crc kubenswrapper[4758]: E0130 08:52:56.103501 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="ceilometer-notification-agent" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.103521 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="ceilometer-notification-agent" Jan 30 08:52:56 crc kubenswrapper[4758]: E0130 08:52:56.103539 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="ceilometer-central-agent" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.103547 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="ceilometer-central-agent" Jan 30 08:52:56 crc kubenswrapper[4758]: E0130 08:52:56.103575 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="proxy-httpd" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.103585 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="proxy-httpd" Jan 30 08:52:56 crc kubenswrapper[4758]: E0130 08:52:56.103652 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="sg-core" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.103663 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="sg-core" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.103904 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="ceilometer-notification-agent" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.103927 4758 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="sg-core" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.103937 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="ceilometer-central-agent" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.103963 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" containerName="proxy-httpd" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.111963 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.115272 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.120740 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.121020 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.121188 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.135255 4758 scope.go:117] "RemoveContainer" containerID="4a1277446a2960bf740c100d86ca47dbf20eb3e4bde61a0d988583f9b578d9e7" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.215030 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.215595 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-config-data\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.215719 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-scripts\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.215834 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-run-httpd\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.215951 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.216255 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.216415 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-log-httpd\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.216562 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhrnb\" (UniqueName: \"kubernetes.io/projected/a23235c6-1a6b-42cb-a434-08b7e3555915-kube-api-access-hhrnb\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.319245 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.319297 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-log-httpd\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.319332 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhrnb\" (UniqueName: \"kubernetes.io/projected/a23235c6-1a6b-42cb-a434-08b7e3555915-kube-api-access-hhrnb\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.319390 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.319455 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-config-data\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.319478 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-scripts\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.319492 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-run-httpd\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.319520 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.324371 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.325290 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-log-httpd\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.325918 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.326195 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-run-httpd\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.328112 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.337317 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-scripts\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.343366 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-config-data\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.348872 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhrnb\" (UniqueName: \"kubernetes.io/projected/a23235c6-1a6b-42cb-a434-08b7e3555915-kube-api-access-hhrnb\") pod \"ceilometer-0\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.563038 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.890359 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-95vnq" event={"ID":"42581858-ead1-4898-9fad-72411bf3c6a4","Type":"ContainerStarted","Data":"83928b37f6975882ba286ca1fad93e1bc51d5667bdaabba8b964bdb9f2dfdcd4"} Jan 30 08:52:56 crc kubenswrapper[4758]: I0130 08:52:56.964828 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-95vnq" podStartSLOduration=2.964807751 podStartE2EDuration="2.964807751s" podCreationTimestamp="2026-01-30 08:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:52:56.956564753 +0000 UTC m=+1381.928876324" watchObservedRunningTime="2026-01-30 08:52:56.964807751 +0000 UTC m=+1381.937119302" Jan 30 08:52:57 crc kubenswrapper[4758]: I0130 08:52:57.550253 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:52:57 crc kubenswrapper[4758]: I0130 08:52:57.788331 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3078c00-f908-4759-86c3-ce6109c669c9" path="/var/lib/kubelet/pods/e3078c00-f908-4759-86c3-ce6109c669c9/volumes" Jan 30 08:52:57 crc kubenswrapper[4758]: I0130 08:52:57.933025 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" event={"ID":"88205fec-5592-41f1-a351-daf34b97add7","Type":"ContainerStarted","Data":"dce4b9b027663af6d3ed749e75b2800742d3649e7fa7c5291abccf826e64b995"} Jan 30 08:52:57 crc kubenswrapper[4758]: I0130 08:52:57.933188 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:52:57 crc kubenswrapper[4758]: I0130 08:52:57.969678 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" podStartSLOduration=4.969640386 podStartE2EDuration="4.969640386s" podCreationTimestamp="2026-01-30 08:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:52:57.955731951 +0000 UTC m=+1382.928043502" watchObservedRunningTime="2026-01-30 08:52:57.969640386 +0000 UTC m=+1382.941951947" Jan 30 08:52:58 crc kubenswrapper[4758]: I0130 08:52:58.069095 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:52:58 crc kubenswrapper[4758]: I0130 08:52:58.093014 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:52:58 crc kubenswrapper[4758]: I0130 08:52:58.992197 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 08:53:00 crc kubenswrapper[4758]: W0130 08:53:00.307249 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda23235c6_1a6b_42cb_a434_08b7e3555915.slice/crio-b5ca816011f4f5a4791f16842e9d93e8577093c69e06c21a6dbcef6e6e551521 WatchSource:0}: Error finding container b5ca816011f4f5a4791f16842e9d93e8577093c69e06c21a6dbcef6e6e551521: Status 404 returned error can't find the container with id b5ca816011f4f5a4791f16842e9d93e8577093c69e06c21a6dbcef6e6e551521 Jan 30 08:53:00 crc kubenswrapper[4758]: I0130 08:53:00.780683 4758 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/horizon-5cf698bb7b-gp87v" Jan 30 08:53:00 crc kubenswrapper[4758]: I0130 08:53:00.852486 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76fc974bd8-4mnvj"] Jan 30 08:53:00 crc kubenswrapper[4758]: I0130 08:53:00.852817 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon-log" containerID="cri-o://928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032" gracePeriod=30 Jan 30 08:53:00 crc kubenswrapper[4758]: I0130 08:53:00.852984 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" containerID="cri-o://f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b" gracePeriod=30 Jan 30 08:53:00 crc kubenswrapper[4758]: E0130 08:53:00.870992 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-proxy-75f5775999-fhl5h" podUID="c2358e5c-db98-4b7b-8b6c-2e83132655a9" Jan 30 08:53:00 crc kubenswrapper[4758]: I0130 08:53:00.989973 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:53:00 crc kubenswrapper[4758]: I0130 08:53:00.989990 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerStarted","Data":"b5ca816011f4f5a4791f16842e9d93e8577093c69e06c21a6dbcef6e6e551521"} Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.481453 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zpwcm"] Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.484172 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.512249 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zpwcm"] Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.587402 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-catalog-content\") pod \"redhat-operators-zpwcm\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") " pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.587503 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-utilities\") pod \"redhat-operators-zpwcm\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") " pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.587547 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn6cl\" (UniqueName: \"kubernetes.io/projected/c423a7ce-2dda-478f-8579-641954eee4a9-kube-api-access-cn6cl\") pod \"redhat-operators-zpwcm\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") " pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.688737 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-utilities\") pod \"redhat-operators-zpwcm\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") " pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.688819 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn6cl\" (UniqueName: \"kubernetes.io/projected/c423a7ce-2dda-478f-8579-641954eee4a9-kube-api-access-cn6cl\") pod \"redhat-operators-zpwcm\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") " pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.688936 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-catalog-content\") pod \"redhat-operators-zpwcm\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") " pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.689349 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-catalog-content\") pod \"redhat-operators-zpwcm\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") " pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.689396 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-utilities\") pod \"redhat-operators-zpwcm\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") " pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.710075 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cn6cl\" (UniqueName: \"kubernetes.io/projected/c423a7ce-2dda-478f-8579-641954eee4a9-kube-api-access-cn6cl\") pod \"redhat-operators-zpwcm\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") " pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:02 crc kubenswrapper[4758]: I0130 08:53:02.846395 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zpwcm" Jan 30 08:53:03 crc kubenswrapper[4758]: I0130 08:53:03.623382 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:53:03 crc kubenswrapper[4758]: I0130 08:53:03.693502 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77c9c856fc-frscq"] Jan 30 08:53:03 crc kubenswrapper[4758]: I0130 08:53:03.693954 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" podUID="7f24b01a-1d08-4fcc-9bbc-591644e40964" containerName="dnsmasq-dns" containerID="cri-o://516401664f75cbce8e0c6bf0b65e1672368441d65029374151387707c2397642" gracePeriod=10 Jan 30 08:53:04 crc kubenswrapper[4758]: I0130 08:53:04.022099 4758 generic.go:334] "Generic (PLEG): container finished" podID="7f24b01a-1d08-4fcc-9bbc-591644e40964" containerID="516401664f75cbce8e0c6bf0b65e1672368441d65029374151387707c2397642" exitCode=0 Jan 30 08:53:04 crc kubenswrapper[4758]: I0130 08:53:04.022381 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" event={"ID":"7f24b01a-1d08-4fcc-9bbc-591644e40964","Type":"ContainerDied","Data":"516401664f75cbce8e0c6bf0b65e1672368441d65029374151387707c2397642"} Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.070507 4758 generic.go:334] "Generic (PLEG): container finished" podID="365b123c-aa7f-464d-b659-78154f86d42f" containerID="f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b" exitCode=0 Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.070778 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerDied","Data":"f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b"} Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.070814 4758 scope.go:117] "RemoveContainer" containerID="e8e2a20c9ea4e39eefa6de1113690b21340a717486562346be315bf42b27f2b0" Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.535364 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.657772 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-sb\") pod \"7f24b01a-1d08-4fcc-9bbc-591644e40964\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.657903 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-dns-svc\") pod \"7f24b01a-1d08-4fcc-9bbc-591644e40964\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.657967 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-nb\") pod \"7f24b01a-1d08-4fcc-9bbc-591644e40964\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.658127 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-config\") pod \"7f24b01a-1d08-4fcc-9bbc-591644e40964\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.658553 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c65px\" (UniqueName: \"kubernetes.io/projected/7f24b01a-1d08-4fcc-9bbc-591644e40964-kube-api-access-c65px\") pod \"7f24b01a-1d08-4fcc-9bbc-591644e40964\" (UID: \"7f24b01a-1d08-4fcc-9bbc-591644e40964\") " Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.714309 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f24b01a-1d08-4fcc-9bbc-591644e40964-kube-api-access-c65px" (OuterVolumeSpecName: "kube-api-access-c65px") pod "7f24b01a-1d08-4fcc-9bbc-591644e40964" (UID: "7f24b01a-1d08-4fcc-9bbc-591644e40964"). InnerVolumeSpecName "kube-api-access-c65px". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.763545 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c65px\" (UniqueName: \"kubernetes.io/projected/7f24b01a-1d08-4fcc-9bbc-591644e40964-kube-api-access-c65px\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.764911 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zpwcm"] Jan 30 08:53:05 crc kubenswrapper[4758]: I0130 08:53:05.866831 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:53:05 crc kubenswrapper[4758]: E0130 08:53:05.868887 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:53:05 crc kubenswrapper[4758]: E0130 08:53:05.868915 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-75f5775999-fhl5h: configmap "swift-ring-files" not found Jan 30 08:53:05 crc kubenswrapper[4758]: E0130 08:53:05.868971 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift podName:c2358e5c-db98-4b7b-8b6c-2e83132655a9 nodeName:}" failed. No retries permitted until 2026-01-30 08:55:07.868946679 +0000 UTC m=+1512.841258230 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift") pod "swift-proxy-75f5775999-fhl5h" (UID: "c2358e5c-db98-4b7b-8b6c-2e83132655a9") : configmap "swift-ring-files" not found Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.010944 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.043571 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f24b01a-1d08-4fcc-9bbc-591644e40964" (UID: "7f24b01a-1d08-4fcc-9bbc-591644e40964"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.071937 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.110180 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="0b4a1798-01a8-4e2f-8c93-ee4053777f75" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec" gracePeriod=30 Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.137641 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.148305 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.18498107 podStartE2EDuration="14.148284118s" podCreationTimestamp="2026-01-30 08:52:52 +0000 UTC" firstStartedPulling="2026-01-30 08:52:53.955673552 +0000 UTC m=+1378.927985103" lastFinishedPulling="2026-01-30 08:53:04.9189766 +0000 UTC m=+1389.891288151" observedRunningTime="2026-01-30 08:53:06.144642994 +0000 UTC m=+1391.116954555" watchObservedRunningTime="2026-01-30 08:53:06.148284118 +0000 UTC m=+1391.120595669" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.155896 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f24b01a-1d08-4fcc-9bbc-591644e40964" (UID: "7f24b01a-1d08-4fcc-9bbc-591644e40964"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.174088 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.179415 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.845990436 podStartE2EDuration="15.179388291s" podCreationTimestamp="2026-01-30 08:52:51 +0000 UTC" firstStartedPulling="2026-01-30 08:52:53.513813879 +0000 UTC m=+1378.486125430" lastFinishedPulling="2026-01-30 08:53:04.847211734 +0000 UTC m=+1389.819523285" observedRunningTime="2026-01-30 08:53:06.177799351 +0000 UTC m=+1391.150110912" watchObservedRunningTime="2026-01-30 08:53:06.179388291 +0000 UTC m=+1391.151699842" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.218393 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-config" (OuterVolumeSpecName: "config") pod "7f24b01a-1d08-4fcc-9bbc-591644e40964" (UID: "7f24b01a-1d08-4fcc-9bbc-591644e40964"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.253671 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f24b01a-1d08-4fcc-9bbc-591644e40964" (UID: "7f24b01a-1d08-4fcc-9bbc-591644e40964"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.276794 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.277072 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f24b01a-1d08-4fcc-9bbc-591644e40964-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.392692 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0b4a1798-01a8-4e2f-8c93-ee4053777f75","Type":"ContainerStarted","Data":"1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec"} Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.393014 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7fbb8129-e72f-4925-a38a-06505fe53fb3","Type":"ContainerStarted","Data":"63a7009910a7bfc689262aa96451a1f94225bfd5fa271b9056b1473253b69674"} Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.393239 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01bcaf88-537a-47cd-b50d-6c95e392f2a8","Type":"ContainerStarted","Data":"508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0"} Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.393345 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwcm" event={"ID":"c423a7ce-2dda-478f-8579-641954eee4a9","Type":"ContainerStarted","Data":"126ded88ff78aed69c62e79ace5301ae74868b1dd3131b082272b1efab898826"} Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.393450 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c9c856fc-frscq" event={"ID":"7f24b01a-1d08-4fcc-9bbc-591644e40964","Type":"ContainerDied","Data":"6d3908fc7a3657c2e0650ef09a082b68d8f4744d5e8a7add60e40d22f8b4f455"} Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.393547 4758 scope.go:117] "RemoveContainer" containerID="516401664f75cbce8e0c6bf0b65e1672368441d65029374151387707c2397642" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.475588 4758 scope.go:117] "RemoveContainer" containerID="0fad6fa55bd6734af8e520f919028effe9343fb8ce554bc0f6cdb0c2e748ee42" Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.529653 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77c9c856fc-frscq"] Jan 30 08:53:06 crc kubenswrapper[4758]: I0130 08:53:06.569016 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77c9c856fc-frscq"] Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.163621 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwcm" event={"ID":"c423a7ce-2dda-478f-8579-641954eee4a9","Type":"ContainerStarted","Data":"bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307"} Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.178754 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a20097fd-7119-487f-8a17-6b1f7f0f5bc7","Type":"ContainerStarted","Data":"462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7"} Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.178824 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"a20097fd-7119-487f-8a17-6b1f7f0f5bc7","Type":"ContainerStarted","Data":"f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303"} Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.196293 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerStarted","Data":"7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9"} Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.213458 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerName="nova-metadata-log" containerID="cri-o://508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0" gracePeriod=30 Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.213542 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01bcaf88-537a-47cd-b50d-6c95e392f2a8","Type":"ContainerStarted","Data":"20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa"} Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.213609 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerName="nova-metadata-metadata" containerID="cri-o://20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa" gracePeriod=30 Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.238524 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.437838835 podStartE2EDuration="16.238500984s" podCreationTimestamp="2026-01-30 08:52:51 +0000 UTC" firstStartedPulling="2026-01-30 08:52:54.200560783 +0000 UTC m=+1379.172872334" lastFinishedPulling="2026-01-30 08:53:05.001222922 +0000 UTC m=+1389.973534483" observedRunningTime="2026-01-30 08:53:07.233178367 +0000 UTC m=+1392.205489938" watchObservedRunningTime="2026-01-30 08:53:07.238500984 +0000 UTC m=+1392.210812535" Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.247013 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.267296 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.5587698880000005 podStartE2EDuration="15.267264354s" podCreationTimestamp="2026-01-30 08:52:52 +0000 UTC" firstStartedPulling="2026-01-30 08:52:54.243124335 +0000 UTC m=+1379.215435886" lastFinishedPulling="2026-01-30 08:53:04.951618801 +0000 UTC m=+1389.923930352" observedRunningTime="2026-01-30 08:53:07.261103621 +0000 UTC m=+1392.233415172" watchObservedRunningTime="2026-01-30 08:53:07.267264354 +0000 UTC m=+1392.239575915" Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.747671 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:53:07 crc kubenswrapper[4758]: I0130 08:53:07.784212 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f24b01a-1d08-4fcc-9bbc-591644e40964" path="/var/lib/kubelet/pods/7f24b01a-1d08-4fcc-9bbc-591644e40964/volumes" Jan 30 08:53:08 crc kubenswrapper[4758]: I0130 08:53:08.230221 4758 generic.go:334] "Generic (PLEG): container finished" podID="c423a7ce-2dda-478f-8579-641954eee4a9" containerID="bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307" 
exitCode=0 Jan 30 08:53:08 crc kubenswrapper[4758]: I0130 08:53:08.230462 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwcm" event={"ID":"c423a7ce-2dda-478f-8579-641954eee4a9","Type":"ContainerDied","Data":"bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307"} Jan 30 08:53:08 crc kubenswrapper[4758]: I0130 08:53:08.239664 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerStarted","Data":"3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9"} Jan 30 08:53:08 crc kubenswrapper[4758]: I0130 08:53:08.244006 4758 generic.go:334] "Generic (PLEG): container finished" podID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerID="508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0" exitCode=143 Jan 30 08:53:08 crc kubenswrapper[4758]: I0130 08:53:08.245200 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01bcaf88-537a-47cd-b50d-6c95e392f2a8","Type":"ContainerDied","Data":"508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0"} Jan 30 08:53:08 crc kubenswrapper[4758]: I0130 08:53:08.246691 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 08:53:08 crc kubenswrapper[4758]: I0130 08:53:08.246739 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 08:53:09 crc kubenswrapper[4758]: I0130 08:53:09.256378 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerStarted","Data":"7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c"} Jan 30 08:53:09 crc kubenswrapper[4758]: I0130 08:53:09.261570 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwcm" event={"ID":"c423a7ce-2dda-478f-8579-641954eee4a9","Type":"ContainerStarted","Data":"c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f"} Jan 30 08:53:11 crc kubenswrapper[4758]: I0130 08:53:11.281890 4758 generic.go:334] "Generic (PLEG): container finished" podID="9c7907da-98f4-46c2-9089-7227516cf739" containerID="87c0147985c0330380f0c62d6ffa802a17a1d7f52af945bd4b80ccd53693d21e" exitCode=0 Jan 30 08:53:11 crc kubenswrapper[4758]: I0130 08:53:11.283811 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xp8d6" event={"ID":"9c7907da-98f4-46c2-9089-7227516cf739","Type":"ContainerDied","Data":"87c0147985c0330380f0c62d6ffa802a17a1d7f52af945bd4b80ccd53693d21e"} Jan 30 08:53:12 crc kubenswrapper[4758]: I0130 08:53:12.247462 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 08:53:12 crc kubenswrapper[4758]: I0130 08:53:12.281585 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 08:53:12 crc kubenswrapper[4758]: I0130 08:53:12.333199 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.036138 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.163354 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-scripts\") pod \"9c7907da-98f4-46c2-9089-7227516cf739\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.163557 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfl8m\" (UniqueName: \"kubernetes.io/projected/9c7907da-98f4-46c2-9089-7227516cf739-kube-api-access-sfl8m\") pod \"9c7907da-98f4-46c2-9089-7227516cf739\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.163643 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-config-data\") pod \"9c7907da-98f4-46c2-9089-7227516cf739\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.163705 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-combined-ca-bundle\") pod \"9c7907da-98f4-46c2-9089-7227516cf739\" (UID: \"9c7907da-98f4-46c2-9089-7227516cf739\") " Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.173607 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-scripts" (OuterVolumeSpecName: "scripts") pod "9c7907da-98f4-46c2-9089-7227516cf739" (UID: "9c7907da-98f4-46c2-9089-7227516cf739"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.187846 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.188350 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.201106 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c7907da-98f4-46c2-9089-7227516cf739-kube-api-access-sfl8m" (OuterVolumeSpecName: "kube-api-access-sfl8m") pod "9c7907da-98f4-46c2-9089-7227516cf739" (UID: "9c7907da-98f4-46c2-9089-7227516cf739"). InnerVolumeSpecName "kube-api-access-sfl8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.211433 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c7907da-98f4-46c2-9089-7227516cf739" (UID: "9c7907da-98f4-46c2-9089-7227516cf739"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.213480 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-config-data" (OuterVolumeSpecName: "config-data") pod "9c7907da-98f4-46c2-9089-7227516cf739" (UID: "9c7907da-98f4-46c2-9089-7227516cf739"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.267245 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfl8m\" (UniqueName: \"kubernetes.io/projected/9c7907da-98f4-46c2-9089-7227516cf739-kube-api-access-sfl8m\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.267522 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.267667 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.267757 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c7907da-98f4-46c2-9089-7227516cf739-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.310926 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xp8d6" event={"ID":"9c7907da-98f4-46c2-9089-7227516cf739","Type":"ContainerDied","Data":"9662113a5e6c82a0672966fefbaee39c41d94a66b3bff675c82499c15dc32a24"} Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.310984 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9662113a5e6c82a0672966fefbaee39c41d94a66b3bff675c82499c15dc32a24" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.310909 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xp8d6" Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.456616 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:13 crc kubenswrapper[4758]: I0130 08:53:13.479838 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:53:14 crc kubenswrapper[4758]: I0130 08:53:14.272303 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:53:14 crc kubenswrapper[4758]: I0130 08:53:14.272453 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 08:53:14 crc kubenswrapper[4758]: I0130 08:53:14.321419 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerStarted","Data":"7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6"} Jan 30 08:53:14 crc kubenswrapper[4758]: I0130 08:53:14.321787 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7fbb8129-e72f-4925-a38a-06505fe53fb3" containerName="nova-scheduler-scheduler" containerID="cri-o://63a7009910a7bfc689262aa96451a1f94225bfd5fa271b9056b1473253b69674" gracePeriod=30 Jan 30 08:53:14 crc 
kubenswrapper[4758]: I0130 08:53:14.321988 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-log" containerID="cri-o://f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303" gracePeriod=30 Jan 30 08:53:14 crc kubenswrapper[4758]: I0130 08:53:14.322064 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-api" containerID="cri-o://462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7" gracePeriod=30 Jan 30 08:53:14 crc kubenswrapper[4758]: I0130 08:53:14.366756 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.049155939 podStartE2EDuration="18.366730684s" podCreationTimestamp="2026-01-30 08:52:56 +0000 UTC" firstStartedPulling="2026-01-30 08:53:01.691504771 +0000 UTC m=+1386.663816322" lastFinishedPulling="2026-01-30 08:53:14.009079516 +0000 UTC m=+1398.981391067" observedRunningTime="2026-01-30 08:53:14.354620985 +0000 UTC m=+1399.326932526" watchObservedRunningTime="2026-01-30 08:53:14.366730684 +0000 UTC m=+1399.339042235" Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.346851 4758 generic.go:334] "Generic (PLEG): container finished" podID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerID="f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303" exitCode=143 Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.346915 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a20097fd-7119-487f-8a17-6b1f7f0f5bc7","Type":"ContainerDied","Data":"f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303"} Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.361765 4758 generic.go:334] "Generic (PLEG): container finished" podID="7fbb8129-e72f-4925-a38a-06505fe53fb3" containerID="63a7009910a7bfc689262aa96451a1f94225bfd5fa271b9056b1473253b69674" exitCode=0 Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.361841 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7fbb8129-e72f-4925-a38a-06505fe53fb3","Type":"ContainerDied","Data":"63a7009910a7bfc689262aa96451a1f94225bfd5fa271b9056b1473253b69674"} Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.387364 4758 generic.go:334] "Generic (PLEG): container finished" podID="c423a7ce-2dda-478f-8579-641954eee4a9" containerID="c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f" exitCode=0 Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.388413 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwcm" event={"ID":"c423a7ce-2dda-478f-8579-641954eee4a9","Type":"ContainerDied","Data":"c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f"} Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.388468 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.692965 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.820702 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dglqr\" (UniqueName: \"kubernetes.io/projected/7fbb8129-e72f-4925-a38a-06505fe53fb3-kube-api-access-dglqr\") pod \"7fbb8129-e72f-4925-a38a-06505fe53fb3\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.820919 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-combined-ca-bundle\") pod \"7fbb8129-e72f-4925-a38a-06505fe53fb3\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.820970 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-config-data\") pod \"7fbb8129-e72f-4925-a38a-06505fe53fb3\" (UID: \"7fbb8129-e72f-4925-a38a-06505fe53fb3\") " Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.831325 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fbb8129-e72f-4925-a38a-06505fe53fb3-kube-api-access-dglqr" (OuterVolumeSpecName: "kube-api-access-dglqr") pod "7fbb8129-e72f-4925-a38a-06505fe53fb3" (UID: "7fbb8129-e72f-4925-a38a-06505fe53fb3"). InnerVolumeSpecName "kube-api-access-dglqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.865940 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-config-data" (OuterVolumeSpecName: "config-data") pod "7fbb8129-e72f-4925-a38a-06505fe53fb3" (UID: "7fbb8129-e72f-4925-a38a-06505fe53fb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.894366 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fbb8129-e72f-4925-a38a-06505fe53fb3" (UID: "7fbb8129-e72f-4925-a38a-06505fe53fb3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.924754 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dglqr\" (UniqueName: \"kubernetes.io/projected/7fbb8129-e72f-4925-a38a-06505fe53fb3-kube-api-access-dglqr\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.924797 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:15 crc kubenswrapper[4758]: I0130 08:53:15.924805 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbb8129-e72f-4925-a38a-06505fe53fb3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.010687 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.399225 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwcm" event={"ID":"c423a7ce-2dda-478f-8579-641954eee4a9","Type":"ContainerStarted","Data":"f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0"} Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.402549 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.402589 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7fbb8129-e72f-4925-a38a-06505fe53fb3","Type":"ContainerDied","Data":"f59aee87045b6560ae771d39cafdf29ecf7529163d23a6f08629ad7171493e30"} Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.402625 4758 scope.go:117] "RemoveContainer" containerID="63a7009910a7bfc689262aa96451a1f94225bfd5fa271b9056b1473253b69674" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.429303 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zpwcm" podStartSLOduration=5.757055066 podStartE2EDuration="14.429278449s" podCreationTimestamp="2026-01-30 08:53:02 +0000 UTC" firstStartedPulling="2026-01-30 08:53:07.166666427 +0000 UTC m=+1392.138977978" lastFinishedPulling="2026-01-30 08:53:15.83888981 +0000 UTC m=+1400.811201361" observedRunningTime="2026-01-30 08:53:16.426034639 +0000 UTC m=+1401.398346220" watchObservedRunningTime="2026-01-30 08:53:16.429278449 +0000 UTC m=+1401.401590011" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.479100 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.494922 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.516770 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:53:16 crc kubenswrapper[4758]: E0130 08:53:16.517229 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f24b01a-1d08-4fcc-9bbc-591644e40964" containerName="dnsmasq-dns" Jan 30 08:53:16 crc 
kubenswrapper[4758]: I0130 08:53:16.517251 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f24b01a-1d08-4fcc-9bbc-591644e40964" containerName="dnsmasq-dns" Jan 30 08:53:16 crc kubenswrapper[4758]: E0130 08:53:16.517274 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fbb8129-e72f-4925-a38a-06505fe53fb3" containerName="nova-scheduler-scheduler" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.517292 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fbb8129-e72f-4925-a38a-06505fe53fb3" containerName="nova-scheduler-scheduler" Jan 30 08:53:16 crc kubenswrapper[4758]: E0130 08:53:16.517299 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f24b01a-1d08-4fcc-9bbc-591644e40964" containerName="init" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.517305 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f24b01a-1d08-4fcc-9bbc-591644e40964" containerName="init" Jan 30 08:53:16 crc kubenswrapper[4758]: E0130 08:53:16.517323 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c7907da-98f4-46c2-9089-7227516cf739" containerName="nova-manage" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.517330 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c7907da-98f4-46c2-9089-7227516cf739" containerName="nova-manage" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.517513 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f24b01a-1d08-4fcc-9bbc-591644e40964" containerName="dnsmasq-dns" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.517531 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c7907da-98f4-46c2-9089-7227516cf739" containerName="nova-manage" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.517546 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fbb8129-e72f-4925-a38a-06505fe53fb3" containerName="nova-scheduler-scheduler" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.518212 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.525016 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.589603 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.653438 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.653506 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-config-data\") pod \"nova-scheduler-0\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.653575 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq64w\" (UniqueName: \"kubernetes.io/projected/015d0689-8130-4f19-bb79-866766c02c63-kube-api-access-jq64w\") pod \"nova-scheduler-0\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.755724 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.755795 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-config-data\") pod \"nova-scheduler-0\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.755861 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq64w\" (UniqueName: \"kubernetes.io/projected/015d0689-8130-4f19-bb79-866766c02c63-kube-api-access-jq64w\") pod \"nova-scheduler-0\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.765224 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.773722 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-config-data\") pod \"nova-scheduler-0\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.779809 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq64w\" (UniqueName: 
\"kubernetes.io/projected/015d0689-8130-4f19-bb79-866766c02c63-kube-api-access-jq64w\") pod \"nova-scheduler-0\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") " pod="openstack/nova-scheduler-0" Jan 30 08:53:16 crc kubenswrapper[4758]: I0130 08:53:16.891080 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 08:53:17 crc kubenswrapper[4758]: I0130 08:53:17.559276 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 08:53:17 crc kubenswrapper[4758]: I0130 08:53:17.802247 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fbb8129-e72f-4925-a38a-06505fe53fb3" path="/var/lib/kubelet/pods/7fbb8129-e72f-4925-a38a-06505fe53fb3/volumes" Jan 30 08:53:18 crc kubenswrapper[4758]: I0130 08:53:18.428622 4758 generic.go:334] "Generic (PLEG): container finished" podID="42581858-ead1-4898-9fad-72411bf3c6a4" containerID="83928b37f6975882ba286ca1fad93e1bc51d5667bdaabba8b964bdb9f2dfdcd4" exitCode=0 Jan 30 08:53:18 crc kubenswrapper[4758]: I0130 08:53:18.428960 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-95vnq" event={"ID":"42581858-ead1-4898-9fad-72411bf3c6a4","Type":"ContainerDied","Data":"83928b37f6975882ba286ca1fad93e1bc51d5667bdaabba8b964bdb9f2dfdcd4"} Jan 30 08:53:18 crc kubenswrapper[4758]: I0130 08:53:18.431082 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"015d0689-8130-4f19-bb79-866766c02c63","Type":"ContainerStarted","Data":"d070fa2fc47120d90dd29347010dd2bd317f21d5673f4354507acac0096009be"} Jan 30 08:53:18 crc kubenswrapper[4758]: I0130 08:53:18.431125 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"015d0689-8130-4f19-bb79-866766c02c63","Type":"ContainerStarted","Data":"7df77a789ad68a7f771f50962b45656f81dbca922fc56cf93ed7fd78a2640d9a"} Jan 30 08:53:18 crc kubenswrapper[4758]: I0130 08:53:18.522788 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.522765724 podStartE2EDuration="2.522765724s" podCreationTimestamp="2026-01-30 08:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:53:18.516968282 +0000 UTC m=+1403.489279863" watchObservedRunningTime="2026-01-30 08:53:18.522765724 +0000 UTC m=+1403.495077275" Jan 30 08:53:19 crc kubenswrapper[4758]: I0130 08:53:19.836773 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-95vnq"
Jan 30 08:53:19 crc kubenswrapper[4758]: I0130 08:53:19.959153 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-combined-ca-bundle\") pod \"42581858-ead1-4898-9fad-72411bf3c6a4\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") "
Jan 30 08:53:19 crc kubenswrapper[4758]: I0130 08:53:19.959454 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq455\" (UniqueName: \"kubernetes.io/projected/42581858-ead1-4898-9fad-72411bf3c6a4-kube-api-access-kq455\") pod \"42581858-ead1-4898-9fad-72411bf3c6a4\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") "
Jan 30 08:53:19 crc kubenswrapper[4758]: I0130 08:53:19.959574 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-config-data\") pod \"42581858-ead1-4898-9fad-72411bf3c6a4\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") "
Jan 30 08:53:19 crc kubenswrapper[4758]: I0130 08:53:19.959755 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-scripts\") pod \"42581858-ead1-4898-9fad-72411bf3c6a4\" (UID: \"42581858-ead1-4898-9fad-72411bf3c6a4\") "
Jan 30 08:53:19 crc kubenswrapper[4758]: I0130 08:53:19.982371 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-scripts" (OuterVolumeSpecName: "scripts") pod "42581858-ead1-4898-9fad-72411bf3c6a4" (UID: "42581858-ead1-4898-9fad-72411bf3c6a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:53:19 crc kubenswrapper[4758]: I0130 08:53:19.982541 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42581858-ead1-4898-9fad-72411bf3c6a4-kube-api-access-kq455" (OuterVolumeSpecName: "kube-api-access-kq455") pod "42581858-ead1-4898-9fad-72411bf3c6a4" (UID: "42581858-ead1-4898-9fad-72411bf3c6a4"). InnerVolumeSpecName "kube-api-access-kq455". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:53:19 crc kubenswrapper[4758]: I0130 08:53:19.995131 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42581858-ead1-4898-9fad-72411bf3c6a4" (UID: "42581858-ead1-4898-9fad-72411bf3c6a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.006660 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-config-data" (OuterVolumeSpecName: "config-data") pod "42581858-ead1-4898-9fad-72411bf3c6a4" (UID: "42581858-ead1-4898-9fad-72411bf3c6a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.062409 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.062471 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.062484 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq455\" (UniqueName: \"kubernetes.io/projected/42581858-ead1-4898-9fad-72411bf3c6a4-kube-api-access-kq455\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.062493 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42581858-ead1-4898-9fad-72411bf3c6a4-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.451578 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-95vnq" event={"ID":"42581858-ead1-4898-9fad-72411bf3c6a4","Type":"ContainerDied","Data":"4f43893ac45e4922957242a9db4708b2364bce5aae74e0b958012018e45639fe"}
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.451613 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-95vnq"
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.451631 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f43893ac45e4922957242a9db4708b2364bce5aae74e0b958012018e45639fe"
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.618439 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 08:53:20 crc kubenswrapper[4758]: E0130 08:53:20.618961 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42581858-ead1-4898-9fad-72411bf3c6a4" containerName="nova-cell1-conductor-db-sync"
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.618984 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="42581858-ead1-4898-9fad-72411bf3c6a4" containerName="nova-cell1-conductor-db-sync"
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.619247 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="42581858-ead1-4898-9fad-72411bf3c6a4" containerName="nova-cell1-conductor-db-sync"
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.620086 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.626677 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.637300 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.775743 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9de33ead-37a1-4675-bad7-79b672a0c954-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9de33ead-37a1-4675-bad7-79b672a0c954\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.775846 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgrgm\" (UniqueName: \"kubernetes.io/projected/9de33ead-37a1-4675-bad7-79b672a0c954-kube-api-access-bgrgm\") pod \"nova-cell1-conductor-0\" (UID: \"9de33ead-37a1-4675-bad7-79b672a0c954\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.775923 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9de33ead-37a1-4675-bad7-79b672a0c954-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9de33ead-37a1-4675-bad7-79b672a0c954\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.877911 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9de33ead-37a1-4675-bad7-79b672a0c954-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9de33ead-37a1-4675-bad7-79b672a0c954\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.878014 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgrgm\" (UniqueName: \"kubernetes.io/projected/9de33ead-37a1-4675-bad7-79b672a0c954-kube-api-access-bgrgm\") pod \"nova-cell1-conductor-0\" (UID: \"9de33ead-37a1-4675-bad7-79b672a0c954\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.878141 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9de33ead-37a1-4675-bad7-79b672a0c954-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9de33ead-37a1-4675-bad7-79b672a0c954\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.884734 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9de33ead-37a1-4675-bad7-79b672a0c954-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9de33ead-37a1-4675-bad7-79b672a0c954\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.885749 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9de33ead-37a1-4675-bad7-79b672a0c954-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9de33ead-37a1-4675-bad7-79b672a0c954\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.901312 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgrgm\" (UniqueName: \"kubernetes.io/projected/9de33ead-37a1-4675-bad7-79b672a0c954-kube-api-access-bgrgm\") pod \"nova-cell1-conductor-0\" (UID: \"9de33ead-37a1-4675-bad7-79b672a0c954\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 08:53:20 crc kubenswrapper[4758]: I0130 08:53:20.946880 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 30 08:53:21 crc kubenswrapper[4758]: I0130 08:53:21.492011 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 08:53:21 crc kubenswrapper[4758]: I0130 08:53:21.892019 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 30 08:53:22 crc kubenswrapper[4758]: I0130 08:53:22.485385 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9de33ead-37a1-4675-bad7-79b672a0c954","Type":"ContainerStarted","Data":"25dbab7f3a5fe899a55b3f80388fbfc8b188c0fc73d038179c36d02035a2c61b"}
Jan 30 08:53:22 crc kubenswrapper[4758]: I0130 08:53:22.485428 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9de33ead-37a1-4675-bad7-79b672a0c954","Type":"ContainerStarted","Data":"b66c18c03a83a4ec982c4df58490bed0ca3bdd9ce851d858c92431546587f12e"}
Jan 30 08:53:22 crc kubenswrapper[4758]: I0130 08:53:22.485522 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 30 08:53:22 crc kubenswrapper[4758]: I0130 08:53:22.508315 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.508290617 podStartE2EDuration="2.508290617s" podCreationTimestamp="2026-01-30 08:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:53:22.504401055 +0000 UTC m=+1407.476712596" watchObservedRunningTime="2026-01-30 08:53:22.508290617 +0000 UTC m=+1407.480602168"
Jan 30 08:53:22 crc kubenswrapper[4758]: I0130 08:53:22.847393 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zpwcm"
Jan 30 08:53:22 crc kubenswrapper[4758]: I0130 08:53:22.848676 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zpwcm"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.186939 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.187337 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.253400 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.335952 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-logs\") pod \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") "
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.336070 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-config-data\") pod \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") "
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.336245 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-combined-ca-bundle\") pod \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") "
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.336328 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwnt5\" (UniqueName: \"kubernetes.io/projected/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-kube-api-access-rwnt5\") pod \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\" (UID: \"a20097fd-7119-487f-8a17-6b1f7f0f5bc7\") "
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.336874 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-logs" (OuterVolumeSpecName: "logs") pod "a20097fd-7119-487f-8a17-6b1f7f0f5bc7" (UID: "a20097fd-7119-487f-8a17-6b1f7f0f5bc7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.364314 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-kube-api-access-rwnt5" (OuterVolumeSpecName: "kube-api-access-rwnt5") pod "a20097fd-7119-487f-8a17-6b1f7f0f5bc7" (UID: "a20097fd-7119-487f-8a17-6b1f7f0f5bc7"). InnerVolumeSpecName "kube-api-access-rwnt5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.369844 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-config-data" (OuterVolumeSpecName: "config-data") pod "a20097fd-7119-487f-8a17-6b1f7f0f5bc7" (UID: "a20097fd-7119-487f-8a17-6b1f7f0f5bc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.369928 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a20097fd-7119-487f-8a17-6b1f7f0f5bc7" (UID: "a20097fd-7119-487f-8a17-6b1f7f0f5bc7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.438434 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-logs\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.438691 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.438760 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.438822 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwnt5\" (UniqueName: \"kubernetes.io/projected/a20097fd-7119-487f-8a17-6b1f7f0f5bc7-kube-api-access-rwnt5\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.499512 4758 generic.go:334] "Generic (PLEG): container finished" podID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerID="462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7" exitCode=0
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.500221 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a20097fd-7119-487f-8a17-6b1f7f0f5bc7","Type":"ContainerDied","Data":"462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7"}
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.500297 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a20097fd-7119-487f-8a17-6b1f7f0f5bc7","Type":"ContainerDied","Data":"0554dd0e1bf67b74e082f84d57d630b9e35218363af5e1129b9eccdc1687b263"}
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.500306 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.500325 4758 scope.go:117] "RemoveContainer" containerID="462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.546258 4758 scope.go:117] "RemoveContainer" containerID="f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.551601 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.575358 4758 scope.go:117] "RemoveContainer" containerID="462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7"
Jan 30 08:53:23 crc kubenswrapper[4758]: E0130 08:53:23.577302 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7\": container with ID starting with 462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7 not found: ID does not exist" containerID="462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.577378 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7"} err="failed to get container status \"462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7\": rpc error: code = NotFound desc = could not find container \"462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7\": container with ID starting with 462282ba181fbd9479f30c84ba1c53fd5d974c0ed9b0039aabe9d86fc81fe3a7 not found: ID does not exist"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.577415 4758 scope.go:117] "RemoveContainer" containerID="f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303"
Jan 30 08:53:23 crc kubenswrapper[4758]: E0130 08:53:23.579097 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303\": container with ID starting with f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303 not found: ID does not exist" containerID="f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.579171 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303"} err="failed to get container status \"f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303\": rpc error: code = NotFound desc = could not find container \"f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303\": container with ID starting with f23edd45ffc543e944bd41363e845e6761850ad74cea082f14d2bcead1c33303 not found: ID does not exist"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.584542 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.600881 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 30 08:53:23 crc kubenswrapper[4758]: E0130 08:53:23.601432 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-api"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.601454 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-api"
Jan 30 08:53:23 crc kubenswrapper[4758]: E0130 08:53:23.601497 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-log"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.601504 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-log"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.601715 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-log"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.601754 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" containerName="nova-api-api"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.602991 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.606655 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.623812 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.751944 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.752031 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzkbm\" (UniqueName: \"kubernetes.io/projected/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-kube-api-access-mzkbm\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.752253 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-logs\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.752304 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-config-data\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.791337 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a20097fd-7119-487f-8a17-6b1f7f0f5bc7" path="/var/lib/kubelet/pods/a20097fd-7119-487f-8a17-6b1f7f0f5bc7/volumes"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.854453 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-logs\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.855001 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-config-data\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.855174 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.855214 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzkbm\" (UniqueName: \"kubernetes.io/projected/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-kube-api-access-mzkbm\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.859900 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-logs\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.862496 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-config-data\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.862838 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.889679 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzkbm\" (UniqueName: \"kubernetes.io/projected/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-kube-api-access-mzkbm\") pod \"nova-api-0\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " pod="openstack/nova-api-0"
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.897979 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zpwcm" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" containerName="registry-server" probeResult="failure" output=<
Jan 30 08:53:23 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s
Jan 30 08:53:23 crc kubenswrapper[4758]: >
Jan 30 08:53:23 crc kubenswrapper[4758]: I0130 08:53:23.925041 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 08:53:24 crc kubenswrapper[4758]: I0130 08:53:24.383498 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 08:53:24 crc kubenswrapper[4758]: I0130 08:53:24.511429 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"498895d5-d6fe-4ea3-ae4b-c610c53b89c3","Type":"ContainerStarted","Data":"73e4290d9c9ecf7bf09bd3149f6a065ae76b3173eaddcd9c2fd875f062240283"}
Jan 30 08:53:25 crc kubenswrapper[4758]: I0130 08:53:25.523943 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"498895d5-d6fe-4ea3-ae4b-c610c53b89c3","Type":"ContainerStarted","Data":"f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc"}
Jan 30 08:53:25 crc kubenswrapper[4758]: I0130 08:53:25.524333 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"498895d5-d6fe-4ea3-ae4b-c610c53b89c3","Type":"ContainerStarted","Data":"a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de"}
Jan 30 08:53:25 crc kubenswrapper[4758]: I0130 08:53:25.547466 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.547441644 podStartE2EDuration="2.547441644s" podCreationTimestamp="2026-01-30 08:53:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:53:25.540621271 +0000 UTC m=+1410.512932822" watchObservedRunningTime="2026-01-30 08:53:25.547441644 +0000 UTC m=+1410.519753215"
Jan 30 08:53:26 crc kubenswrapper[4758]: I0130 08:53:26.010765 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-76fc974bd8-4mnvj" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused"
Jan 30 08:53:26 crc kubenswrapper[4758]: I0130 08:53:26.011169 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-76fc974bd8-4mnvj"
Jan 30 08:53:26 crc kubenswrapper[4758]: I0130 08:53:26.590379 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 30 08:53:26 crc kubenswrapper[4758]: I0130 08:53:26.892311 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 30 08:53:26 crc kubenswrapper[4758]: I0130 08:53:26.919373 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 30 08:53:27 crc kubenswrapper[4758]: I0130 08:53:27.573808 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:30.992180 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.341270 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76fc974bd8-4mnvj"
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.409477 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-config-data\") pod \"365b123c-aa7f-464d-b659-78154f86d42f\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") "
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.409568 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-tls-certs\") pod \"365b123c-aa7f-464d-b659-78154f86d42f\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") "
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.409711 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-combined-ca-bundle\") pod \"365b123c-aa7f-464d-b659-78154f86d42f\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") "
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.409783 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365b123c-aa7f-464d-b659-78154f86d42f-logs\") pod \"365b123c-aa7f-464d-b659-78154f86d42f\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") "
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.409838 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-secret-key\") pod \"365b123c-aa7f-464d-b659-78154f86d42f\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") "
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.410476 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/365b123c-aa7f-464d-b659-78154f86d42f-logs" (OuterVolumeSpecName: "logs") pod "365b123c-aa7f-464d-b659-78154f86d42f" (UID: "365b123c-aa7f-464d-b659-78154f86d42f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.409876 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm897\" (UniqueName: \"kubernetes.io/projected/365b123c-aa7f-464d-b659-78154f86d42f-kube-api-access-wm897\") pod \"365b123c-aa7f-464d-b659-78154f86d42f\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") "
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.410767 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-scripts\") pod \"365b123c-aa7f-464d-b659-78154f86d42f\" (UID: \"365b123c-aa7f-464d-b659-78154f86d42f\") "
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.412539 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365b123c-aa7f-464d-b659-78154f86d42f-logs\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.429601 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/365b123c-aa7f-464d-b659-78154f86d42f-kube-api-access-wm897" (OuterVolumeSpecName: "kube-api-access-wm897") pod "365b123c-aa7f-464d-b659-78154f86d42f" (UID: "365b123c-aa7f-464d-b659-78154f86d42f"). InnerVolumeSpecName "kube-api-access-wm897". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.429764 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "365b123c-aa7f-464d-b659-78154f86d42f" (UID: "365b123c-aa7f-464d-b659-78154f86d42f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.459371 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "365b123c-aa7f-464d-b659-78154f86d42f" (UID: "365b123c-aa7f-464d-b659-78154f86d42f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.471335 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-scripts" (OuterVolumeSpecName: "scripts") pod "365b123c-aa7f-464d-b659-78154f86d42f" (UID: "365b123c-aa7f-464d-b659-78154f86d42f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.490115 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "365b123c-aa7f-464d-b659-78154f86d42f" (UID: "365b123c-aa7f-464d-b659-78154f86d42f"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.498254 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-config-data" (OuterVolumeSpecName: "config-data") pod "365b123c-aa7f-464d-b659-78154f86d42f" (UID: "365b123c-aa7f-464d-b659-78154f86d42f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.514696 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.514727 4758 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.514740 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm897\" (UniqueName: \"kubernetes.io/projected/365b123c-aa7f-464d-b659-78154f86d42f-kube-api-access-wm897\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.514754 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.514764 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/365b123c-aa7f-464d-b659-78154f86d42f-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.514776 4758 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/365b123c-aa7f-464d-b659-78154f86d42f-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.583912 4758 generic.go:334] "Generic (PLEG): container finished" podID="365b123c-aa7f-464d-b659-78154f86d42f" containerID="928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032" exitCode=137
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.583963 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerDied","Data":"928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032"}
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.583992 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76fc974bd8-4mnvj" event={"ID":"365b123c-aa7f-464d-b659-78154f86d42f","Type":"ContainerDied","Data":"a0f4a512c849aa8b080efc203f40fc92f70f7018d35ed2e40b0a8bd341bb6b8b"}
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.584011 4758 scope.go:117] "RemoveContainer" containerID="f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b"
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.584402 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76fc974bd8-4mnvj"
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.647339 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76fc974bd8-4mnvj"]
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.657441 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-76fc974bd8-4mnvj"]
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.785771 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="365b123c-aa7f-464d-b659-78154f86d42f" path="/var/lib/kubelet/pods/365b123c-aa7f-464d-b659-78154f86d42f/volumes"
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.787659 4758 scope.go:117] "RemoveContainer" containerID="928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032"
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.808257 4758 scope.go:117] "RemoveContainer" containerID="f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b"
Jan 30 08:53:31 crc kubenswrapper[4758]: E0130 08:53:31.808770 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b\": container with ID starting with f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b not found: ID does not exist" containerID="f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b"
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.808823 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b"} err="failed to get container status \"f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b\": rpc error: code = NotFound desc = could not find container \"f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b\": container with ID starting with f2e67e856feae159cb385f13091e482f4040e7e9a00aabd18fcad3a583bafa2b not found: ID does not exist"
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.808847 4758 scope.go:117] "RemoveContainer" containerID="928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032"
Jan 30 08:53:31 crc kubenswrapper[4758]: E0130 08:53:31.809508 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032\": container with ID starting with 928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032 not found: ID does not exist" containerID="928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032"
Jan 30 08:53:31 crc kubenswrapper[4758]: I0130 08:53:31.809542 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032"} err="failed to get container status \"928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032\": rpc error: code = NotFound desc = could not find container \"928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032\": container with ID starting with 928f645f84c83794b19178e7cd4be5326d115c1163780539b938594135af9032 not found: ID does not exist"
Jan 30 08:53:32 crc kubenswrapper[4758]: I0130 08:53:32.890378 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zpwcm"
Jan 30 08:53:32 crc kubenswrapper[4758]: I0130 08:53:32.937183 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zpwcm"
Jan 30 08:53:33 crc kubenswrapper[4758]: I0130 08:53:33.884106 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zpwcm"]
Jan 30 08:53:33 crc kubenswrapper[4758]: I0130 08:53:33.926334 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 08:53:33 crc kubenswrapper[4758]: I0130 08:53:33.926404 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 08:53:34 crc kubenswrapper[4758]: I0130 08:53:34.609668 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zpwcm" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" containerName="registry-server" containerID="cri-o://f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0" gracePeriod=2
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.013650 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.013750 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.114221 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zpwcm"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.188737 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn6cl\" (UniqueName: \"kubernetes.io/projected/c423a7ce-2dda-478f-8579-641954eee4a9-kube-api-access-cn6cl\") pod \"c423a7ce-2dda-478f-8579-641954eee4a9\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") "
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.189179 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-utilities\") pod \"c423a7ce-2dda-478f-8579-641954eee4a9\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") "
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.189673 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-catalog-content\") pod \"c423a7ce-2dda-478f-8579-641954eee4a9\" (UID: \"c423a7ce-2dda-478f-8579-641954eee4a9\") "
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.197854 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-utilities" (OuterVolumeSpecName: "utilities") pod "c423a7ce-2dda-478f-8579-641954eee4a9" (UID: "c423a7ce-2dda-478f-8579-641954eee4a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.217396 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c423a7ce-2dda-478f-8579-641954eee4a9-kube-api-access-cn6cl" (OuterVolumeSpecName: "kube-api-access-cn6cl") pod "c423a7ce-2dda-478f-8579-641954eee4a9" (UID: "c423a7ce-2dda-478f-8579-641954eee4a9"). InnerVolumeSpecName "kube-api-access-cn6cl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.292755 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.292838 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn6cl\" (UniqueName: \"kubernetes.io/projected/c423a7ce-2dda-478f-8579-641954eee4a9-kube-api-access-cn6cl\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.340960 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c423a7ce-2dda-478f-8579-641954eee4a9" (UID: "c423a7ce-2dda-478f-8579-641954eee4a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.395145 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c423a7ce-2dda-478f-8579-641954eee4a9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.623296 4758 generic.go:334] "Generic (PLEG): container finished" podID="c423a7ce-2dda-478f-8579-641954eee4a9" containerID="f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0" exitCode=0
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.623347 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zpwcm"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.623359 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwcm" event={"ID":"c423a7ce-2dda-478f-8579-641954eee4a9","Type":"ContainerDied","Data":"f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0"}
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.623424 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpwcm" event={"ID":"c423a7ce-2dda-478f-8579-641954eee4a9","Type":"ContainerDied","Data":"126ded88ff78aed69c62e79ace5301ae74868b1dd3131b082272b1efab898826"}
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.623446 4758 scope.go:117] "RemoveContainer" containerID="f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.649970 4758 scope.go:117] "RemoveContainer" containerID="c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.701502 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zpwcm"]
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.703376 4758 scope.go:117] "RemoveContainer" containerID="bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.722238 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zpwcm"]
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.729672 4758 scope.go:117] "RemoveContainer" containerID="f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0"
Jan 30 08:53:35 crc kubenswrapper[4758]: E0130 08:53:35.730420 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0\": container with ID starting with f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0 not found: ID does not exist" containerID="f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.730467 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0"} err="failed to get container status \"f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0\": rpc error: code = NotFound desc = could not find container \"f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0\": container with ID starting with f620dc0be14a29c2abe3bfa8507b2c2a5c5684741e5d0430cbdf845ef84761f0 not found: ID does not exist"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.730495 4758 scope.go:117] "RemoveContainer" containerID="c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f"
Jan 30 08:53:35 crc kubenswrapper[4758]: E0130 08:53:35.730911 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f\": container with ID starting with c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f not found: ID does not exist" containerID="c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.730936 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f"} err="failed to get container status \"c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f\": rpc error: code = NotFound desc = could not find container \"c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f\": container with ID starting with c0e13a76fef0a7047cf0ae5d0c99011ff6fd7629af10b86bbaefaf2923dc673f not found: ID does not exist"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.730951 4758 scope.go:117] "RemoveContainer" containerID="bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307"
Jan 30 08:53:35 crc kubenswrapper[4758]: E0130 08:53:35.731437 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307\": container with ID starting with bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307 not found: ID does not exist" containerID="bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.731478 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307"} err="failed to get container status \"bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307\": rpc error: code = NotFound desc = could not find container \"bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307\": container with ID starting with bb83d8f7c5ee1f0aeb463e764b0edb2fa7668ec2a26f3d38a8d1e17a9bd7b307 not found: ID does not exist"
Jan 30 08:53:35 crc kubenswrapper[4758]: I0130 08:53:35.780191 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" path="/var/lib/kubelet/pods/c423a7ce-2dda-478f-8579-641954eee4a9/volumes"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.500786 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.625773 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-config-data\") pod \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") "
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.625857 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbrx\" (UniqueName: \"kubernetes.io/projected/0b4a1798-01a8-4e2f-8c93-ee4053777f75-kube-api-access-qqbrx\") pod \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") "
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.626061 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-combined-ca-bundle\") pod \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\" (UID: \"0b4a1798-01a8-4e2f-8c93-ee4053777f75\") "
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.632070 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b4a1798-01a8-4e2f-8c93-ee4053777f75-kube-api-access-qqbrx" (OuterVolumeSpecName: "kube-api-access-qqbrx") pod "0b4a1798-01a8-4e2f-8c93-ee4053777f75" (UID: "0b4a1798-01a8-4e2f-8c93-ee4053777f75"). InnerVolumeSpecName "kube-api-access-qqbrx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.636684 4758 generic.go:334] "Generic (PLEG): container finished" podID="0b4a1798-01a8-4e2f-8c93-ee4053777f75" containerID="1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec" exitCode=137
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.636727 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0b4a1798-01a8-4e2f-8c93-ee4053777f75","Type":"ContainerDied","Data":"1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec"}
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.636751 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0b4a1798-01a8-4e2f-8c93-ee4053777f75","Type":"ContainerDied","Data":"0facd9f58ad103e6d6bf1af4bf8a03e376d0d7dc402e1ef4b1234121e0c87d41"}
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.636767 4758 scope.go:117] "RemoveContainer" containerID="1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.637121 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.663139 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-config-data" (OuterVolumeSpecName: "config-data") pod "0b4a1798-01a8-4e2f-8c93-ee4053777f75" (UID: "0b4a1798-01a8-4e2f-8c93-ee4053777f75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.666130 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b4a1798-01a8-4e2f-8c93-ee4053777f75" (UID: "0b4a1798-01a8-4e2f-8c93-ee4053777f75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.714570 4758 scope.go:117] "RemoveContainer" containerID="1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec"
Jan 30 08:53:36 crc kubenswrapper[4758]: E0130 08:53:36.722921 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec\": container with ID starting with 1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec not found: ID does not exist" containerID="1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.722990 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec"} err="failed to get container status \"1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec\": rpc error: code = NotFound desc = could not find container \"1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec\": container with ID starting with 1a1e178c426fe3236be87955c8e4ebf132f7f356464c37396d50715a5ad05eec not found: ID does not exist"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.729054 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.729095 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqbrx\" (UniqueName: \"kubernetes.io/projected/0b4a1798-01a8-4e2f-8c93-ee4053777f75-kube-api-access-qqbrx\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.729107 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4a1798-01a8-4e2f-8c93-ee4053777f75-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.972100 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.980632 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.998836 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 08:53:36 crc kubenswrapper[4758]: E0130 08:53:36.999281 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999303 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon"
Jan 30 08:53:36 crc kubenswrapper[4758]: E0130 08:53:36.999321 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon-log"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999327 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon-log"
Jan 30 08:53:36 crc kubenswrapper[4758]: E0130 08:53:36.999341 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" containerName="registry-server"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999347 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" containerName="registry-server"
Jan 30 08:53:36 crc kubenswrapper[4758]: E0130 08:53:36.999358 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" containerName="extract-utilities"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999364 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" containerName="extract-utilities"
Jan 30 08:53:36 crc kubenswrapper[4758]: E0130 08:53:36.999373 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b4a1798-01a8-4e2f-8c93-ee4053777f75" containerName="nova-cell1-novncproxy-novncproxy"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999379 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b4a1798-01a8-4e2f-8c93-ee4053777f75" containerName="nova-cell1-novncproxy-novncproxy"
Jan 30 08:53:36 crc kubenswrapper[4758]: E0130 08:53:36.999389 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" containerName="extract-content"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999395 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" containerName="extract-content"
Jan 30 08:53:36 crc kubenswrapper[4758]: E0130 08:53:36.999404 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999411 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon"
Jan 30 08:53:36 crc kubenswrapper[4758]: E0130 08:53:36.999427 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999436 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999608 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999625 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999635 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c423a7ce-2dda-478f-8579-641954eee4a9" containerName="registry-server"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999646 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon-log"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999657 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="365b123c-aa7f-464d-b659-78154f86d42f" containerName="horizon"
Jan 30 08:53:36 crc kubenswrapper[4758]: I0130 08:53:36.999664 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b4a1798-01a8-4e2f-8c93-ee4053777f75" containerName="nova-cell1-novncproxy-novncproxy"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.000346 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.004989 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.005284 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.006790 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.008349 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.136139 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.136214 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgvqz\" (UniqueName: \"kubernetes.io/projected/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-kube-api-access-zgvqz\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.136380 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.136491 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.136549 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.238245 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.238606 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.238746 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgvqz\" (UniqueName: \"kubernetes.io/projected/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-kube-api-access-zgvqz\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.239295 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.239808 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.243341 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.244531 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.245096 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.247635 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.256255 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgvqz\" (UniqueName: \"kubernetes.io/projected/6c7b05a5-3faf-4e02-9bb5-f79a4745f073-kube-api-access-zgvqz\") pod \"nova-cell1-novncproxy-0\" (UID: \"6c7b05a5-3faf-4e02-9bb5-f79a4745f073\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.329844
4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.573130 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.647706 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s284j\" (UniqueName: \"kubernetes.io/projected/01bcaf88-537a-47cd-b50d-6c95e392f2a8-kube-api-access-s284j\") pod \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.647781 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-config-data\") pod \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.647900 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01bcaf88-537a-47cd-b50d-6c95e392f2a8-logs\") pod \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.647962 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-combined-ca-bundle\") pod \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\" (UID: \"01bcaf88-537a-47cd-b50d-6c95e392f2a8\") " Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.648434 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01bcaf88-537a-47cd-b50d-6c95e392f2a8-logs" (OuterVolumeSpecName: "logs") pod "01bcaf88-537a-47cd-b50d-6c95e392f2a8" (UID: "01bcaf88-537a-47cd-b50d-6c95e392f2a8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.648740 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01bcaf88-537a-47cd-b50d-6c95e392f2a8-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.649240 4758 generic.go:334] "Generic (PLEG): container finished" podID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerID="20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa" exitCode=137 Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.649495 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01bcaf88-537a-47cd-b50d-6c95e392f2a8","Type":"ContainerDied","Data":"20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa"} Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.649527 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"01bcaf88-537a-47cd-b50d-6c95e392f2a8","Type":"ContainerDied","Data":"2c2f12b4fc7394d7f62cb7f9a1ba9314155b34b91f9d418d8afd7c811d330122"} Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.649572 4758 scope.go:117] "RemoveContainer" containerID="20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.649790 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.655597 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01bcaf88-537a-47cd-b50d-6c95e392f2a8-kube-api-access-s284j" (OuterVolumeSpecName: "kube-api-access-s284j") pod "01bcaf88-537a-47cd-b50d-6c95e392f2a8" (UID: "01bcaf88-537a-47cd-b50d-6c95e392f2a8"). InnerVolumeSpecName "kube-api-access-s284j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.675100 4758 scope.go:117] "RemoveContainer" containerID="508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.681096 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-config-data" (OuterVolumeSpecName: "config-data") pod "01bcaf88-537a-47cd-b50d-6c95e392f2a8" (UID: "01bcaf88-537a-47cd-b50d-6c95e392f2a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.682261 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01bcaf88-537a-47cd-b50d-6c95e392f2a8" (UID: "01bcaf88-537a-47cd-b50d-6c95e392f2a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.695280 4758 scope.go:117] "RemoveContainer" containerID="20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa" Jan 30 08:53:37 crc kubenswrapper[4758]: E0130 08:53:37.695824 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa\": container with ID starting with 20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa not found: ID does not exist" containerID="20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.695870 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa"} err="failed to get container status \"20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa\": rpc error: code = NotFound desc = could not find container \"20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa\": container with ID starting with 20ddc0b344682e55c910ff9de1721a07ba6e44d0a4255e3a00dff7cbd7b971aa not found: ID does not exist" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.695896 4758 scope.go:117] "RemoveContainer" containerID="508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0" Jan 30 08:53:37 crc kubenswrapper[4758]: E0130 08:53:37.696404 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0\": container with ID starting with 508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0 not found: ID does not exist" containerID="508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.696440 4758 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0"} err="failed to get container status \"508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0\": rpc error: code = NotFound desc = could not find container \"508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0\": container with ID starting with 508e133d0247641e5699046eeb521052c96ff21ff16698be67b4fc4d79afdae0 not found: ID does not exist" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.751308 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s284j\" (UniqueName: \"kubernetes.io/projected/01bcaf88-537a-47cd-b50d-6c95e392f2a8-kube-api-access-s284j\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.751368 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.751382 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01bcaf88-537a-47cd-b50d-6c95e392f2a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.780205 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b4a1798-01a8-4e2f-8c93-ee4053777f75" path="/var/lib/kubelet/pods/0b4a1798-01a8-4e2f-8c93-ee4053777f75/volumes" Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.798241 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 08:53:37 crc kubenswrapper[4758]: W0130 08:53:37.802453 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c7b05a5_3faf_4e02_9bb5_f79a4745f073.slice/crio-bf7d5aedbdc73c3e75353176340c0a278cc00b09921b43d00f86c5f18b4b50fe WatchSource:0}: Error finding container bf7d5aedbdc73c3e75353176340c0a278cc00b09921b43d00f86c5f18b4b50fe: Status 404 returned error can't find the container with id bf7d5aedbdc73c3e75353176340c0a278cc00b09921b43d00f86c5f18b4b50fe Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.981029 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:53:37 crc kubenswrapper[4758]: I0130 08:53:37.996455 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.004429 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:53:38 crc kubenswrapper[4758]: E0130 08:53:38.004842 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerName="nova-metadata-metadata" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.004859 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerName="nova-metadata-metadata" Jan 30 08:53:38 crc kubenswrapper[4758]: E0130 08:53:38.004881 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerName="nova-metadata-log" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.004888 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerName="nova-metadata-log" Jan 30 08:53:38 crc 
kubenswrapper[4758]: I0130 08:53:38.007346 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerName="nova-metadata-log" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.007380 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" containerName="nova-metadata-metadata" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.009422 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.025129 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.025590 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.033491 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.059346 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.059447 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfbj4\" (UniqueName: \"kubernetes.io/projected/f43adcc0-5ee5-4f9e-be48-03a532f349d5-kube-api-access-mfbj4\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.059573 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-config-data\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.059624 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.059676 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f43adcc0-5ee5-4f9e-be48-03a532f349d5-logs\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.161176 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f43adcc0-5ee5-4f9e-be48-03a532f349d5-logs\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.161302 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.161366 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfbj4\" (UniqueName: \"kubernetes.io/projected/f43adcc0-5ee5-4f9e-be48-03a532f349d5-kube-api-access-mfbj4\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.161456 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-config-data\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.161495 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.162207 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f43adcc0-5ee5-4f9e-be48-03a532f349d5-logs\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.166756 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-config-data\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.167086 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.169810 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.186334 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfbj4\" (UniqueName: \"kubernetes.io/projected/f43adcc0-5ee5-4f9e-be48-03a532f349d5-kube-api-access-mfbj4\") pod \"nova-metadata-0\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") " pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.326203 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.658201 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6c7b05a5-3faf-4e02-9bb5-f79a4745f073","Type":"ContainerStarted","Data":"c7b3e14079454eb9debc8b067ef17b72a65ff6eb442ad0116ed04074f155bafe"} Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.658541 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6c7b05a5-3faf-4e02-9bb5-f79a4745f073","Type":"ContainerStarted","Data":"bf7d5aedbdc73c3e75353176340c0a278cc00b09921b43d00f86c5f18b4b50fe"} Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.687208 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.68718925 podStartE2EDuration="2.68718925s" podCreationTimestamp="2026-01-30 08:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:53:38.674146242 +0000 UTC m=+1423.646457813" watchObservedRunningTime="2026-01-30 08:53:38.68718925 +0000 UTC m=+1423.659500801" Jan 30 08:53:38 crc kubenswrapper[4758]: I0130 08:53:38.805938 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 08:53:39 crc kubenswrapper[4758]: I0130 08:53:39.674093 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f43adcc0-5ee5-4f9e-be48-03a532f349d5","Type":"ContainerStarted","Data":"0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01"} Jan 30 08:53:39 crc kubenswrapper[4758]: I0130 08:53:39.674650 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f43adcc0-5ee5-4f9e-be48-03a532f349d5","Type":"ContainerStarted","Data":"c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de"} Jan 30 08:53:39 crc kubenswrapper[4758]: I0130 08:53:39.674668 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f43adcc0-5ee5-4f9e-be48-03a532f349d5","Type":"ContainerStarted","Data":"3f9633af4e5e2d3e1abc96a29475ac2109ec7d4b547a737c821f935b6a92090b"} Jan 30 08:53:39 crc kubenswrapper[4758]: I0130 08:53:39.707717 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.707687535 podStartE2EDuration="2.707687535s" podCreationTimestamp="2026-01-30 08:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:53:39.695328139 +0000 UTC m=+1424.667639710" watchObservedRunningTime="2026-01-30 08:53:39.707687535 +0000 UTC m=+1424.679999086" Jan 30 08:53:39 crc kubenswrapper[4758]: I0130 08:53:39.780709 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01bcaf88-537a-47cd-b50d-6c95e392f2a8" path="/var/lib/kubelet/pods/01bcaf88-537a-47cd-b50d-6c95e392f2a8/volumes" Jan 30 08:53:42 crc kubenswrapper[4758]: I0130 08:53:42.330169 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:53:43 crc kubenswrapper[4758]: I0130 08:53:43.326333 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 08:53:43 crc kubenswrapper[4758]: I0130 08:53:43.326620 4758 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 08:53:43 crc kubenswrapper[4758]: I0130 08:53:43.929444 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 08:53:43 crc kubenswrapper[4758]: I0130 08:53:43.929523 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 08:53:43 crc kubenswrapper[4758]: I0130 08:53:43.930190 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 08:53:43 crc kubenswrapper[4758]: I0130 08:53:43.930214 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 08:53:43 crc kubenswrapper[4758]: I0130 08:53:43.935962 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 08:53:43 crc kubenswrapper[4758]: I0130 08:53:43.940495 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.169855 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5459cb87c-dlx4d"] Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.171669 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.199401 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5459cb87c-dlx4d"] Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.290946 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-nb\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.291066 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-config\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.291220 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-sb\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.291260 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlpl6\" (UniqueName: \"kubernetes.io/projected/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-kube-api-access-jlpl6\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.291301 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-dns-svc\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" 
Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.393532 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-sb\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.393589 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlpl6\" (UniqueName: \"kubernetes.io/projected/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-kube-api-access-jlpl6\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.393630 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-dns-svc\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.393674 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-nb\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.393725 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-config\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.394436 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-sb\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.394437 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-config\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.394968 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-dns-svc\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.395254 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-nb\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.418302 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jlpl6\" (UniqueName: \"kubernetes.io/projected/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-kube-api-access-jlpl6\") pod \"dnsmasq-dns-5459cb87c-dlx4d\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:44 crc kubenswrapper[4758]: I0130 08:53:44.509344 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:45 crc kubenswrapper[4758]: I0130 08:53:45.040878 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5459cb87c-dlx4d"] Jan 30 08:53:45 crc kubenswrapper[4758]: I0130 08:53:45.729796 4758 generic.go:334] "Generic (PLEG): container finished" podID="cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" containerID="c6c147de1a4a6872b323ab195d60eaec8350ddff6728805d455fbf3cd9db1508" exitCode=0 Jan 30 08:53:45 crc kubenswrapper[4758]: I0130 08:53:45.729899 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" event={"ID":"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe","Type":"ContainerDied","Data":"c6c147de1a4a6872b323ab195d60eaec8350ddff6728805d455fbf3cd9db1508"} Jan 30 08:53:45 crc kubenswrapper[4758]: I0130 08:53:45.730247 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" event={"ID":"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe","Type":"ContainerStarted","Data":"ee6038add08fd771bda22b40e423fff940fc9b5d99566b28b9e2c028c0c8fa85"} Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.740612 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" event={"ID":"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe","Type":"ContainerStarted","Data":"6e12455a439f0f64e5942ea1c92f66b043c960a09d5ba6c660cf45634c75cc34"} Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.741075 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.767165 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" podStartSLOduration=2.7671404649999998 podStartE2EDuration="2.767140465s" podCreationTimestamp="2026-01-30 08:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:53:46.759013471 +0000 UTC m=+1431.731325042" watchObservedRunningTime="2026-01-30 08:53:46.767140465 +0000 UTC m=+1431.739452036" Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.832216 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.832554 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="ceilometer-central-agent" containerID="cri-o://7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9" gracePeriod=30 Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.832653 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="ceilometer-notification-agent" containerID="cri-o://3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9" gracePeriod=30 Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.832658 4758 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="proxy-httpd" containerID="cri-o://7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6" gracePeriod=30 Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.832671 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="sg-core" containerID="cri-o://7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c" gracePeriod=30 Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.881992 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.882352 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-log" containerID="cri-o://a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de" gracePeriod=30 Jan 30 08:53:46 crc kubenswrapper[4758]: I0130 08:53:46.882371 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-api" containerID="cri-o://f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc" gracePeriod=30 Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.330825 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.356567 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.751065 4758 generic.go:334] "Generic (PLEG): container finished" podID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerID="a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de" exitCode=143 Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.751100 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"498895d5-d6fe-4ea3-ae4b-c610c53b89c3","Type":"ContainerDied","Data":"a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de"} Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.754291 4758 generic.go:334] "Generic (PLEG): container finished" podID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerID="7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6" exitCode=0 Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.754331 4758 generic.go:334] "Generic (PLEG): container finished" podID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerID="7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c" exitCode=2 Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.754340 4758 generic.go:334] "Generic (PLEG): container finished" podID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerID="7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9" exitCode=0 Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.754372 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerDied","Data":"7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6"} Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.754399 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerDied","Data":"7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c"} Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.754414 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerDied","Data":"7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9"} Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.778436 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.971394 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-pmsnh"] Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.972715 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.978405 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.978727 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 08:53:47 crc kubenswrapper[4758]: I0130 08:53:47.990859 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pmsnh"] Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.078095 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-config-data\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.078390 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-scripts\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.078529 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.078658 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqvlv\" (UniqueName: \"kubernetes.io/projected/aaef08b6-5771-4e63-90e2-f3eb803993ad-kube-api-access-dqvlv\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.182755 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqvlv\" (UniqueName: \"kubernetes.io/projected/aaef08b6-5771-4e63-90e2-f3eb803993ad-kube-api-access-dqvlv\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc 
kubenswrapper[4758]: I0130 08:53:48.182974 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-config-data\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.185729 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-scripts\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.185881 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.196792 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.197087 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-scripts\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.203732 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-config-data\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.204341 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqvlv\" (UniqueName: \"kubernetes.io/projected/aaef08b6-5771-4e63-90e2-f3eb803993ad-kube-api-access-dqvlv\") pod \"nova-cell1-cell-mapping-pmsnh\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") " pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.304258 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pmsnh" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.326452 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.326505 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 08:53:48 crc kubenswrapper[4758]: I0130 08:53:48.879445 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pmsnh"] Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.336360 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.346306 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.502859 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.619747 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-combined-ca-bundle\") pod \"a23235c6-1a6b-42cb-a434-08b7e3555915\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.619859 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-ceilometer-tls-certs\") pod \"a23235c6-1a6b-42cb-a434-08b7e3555915\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.619947 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-log-httpd\") pod \"a23235c6-1a6b-42cb-a434-08b7e3555915\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.619976 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-run-httpd\") pod \"a23235c6-1a6b-42cb-a434-08b7e3555915\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.620600 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a23235c6-1a6b-42cb-a434-08b7e3555915" (UID: "a23235c6-1a6b-42cb-a434-08b7e3555915"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.621101 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a23235c6-1a6b-42cb-a434-08b7e3555915" (UID: "a23235c6-1a6b-42cb-a434-08b7e3555915"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.622846 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-config-data\") pod \"a23235c6-1a6b-42cb-a434-08b7e3555915\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.622901 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-sg-core-conf-yaml\") pod \"a23235c6-1a6b-42cb-a434-08b7e3555915\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.622971 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhrnb\" (UniqueName: \"kubernetes.io/projected/a23235c6-1a6b-42cb-a434-08b7e3555915-kube-api-access-hhrnb\") pod \"a23235c6-1a6b-42cb-a434-08b7e3555915\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.623084 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-scripts\") pod \"a23235c6-1a6b-42cb-a434-08b7e3555915\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.623969 4758 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.623996 4758 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a23235c6-1a6b-42cb-a434-08b7e3555915-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.647280 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a23235c6-1a6b-42cb-a434-08b7e3555915-kube-api-access-hhrnb" (OuterVolumeSpecName: "kube-api-access-hhrnb") pod "a23235c6-1a6b-42cb-a434-08b7e3555915" (UID: "a23235c6-1a6b-42cb-a434-08b7e3555915"). InnerVolumeSpecName "kube-api-access-hhrnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.647613 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-scripts" (OuterVolumeSpecName: "scripts") pod "a23235c6-1a6b-42cb-a434-08b7e3555915" (UID: "a23235c6-1a6b-42cb-a434-08b7e3555915"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.716217 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a23235c6-1a6b-42cb-a434-08b7e3555915" (UID: "a23235c6-1a6b-42cb-a434-08b7e3555915"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.725342 4758 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.725372 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhrnb\" (UniqueName: \"kubernetes.io/projected/a23235c6-1a6b-42cb-a434-08b7e3555915-kube-api-access-hhrnb\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.725384 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.793329 4758 generic.go:334] "Generic (PLEG): container finished" podID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerID="3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9" exitCode=0 Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.793431 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.796611 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a23235c6-1a6b-42cb-a434-08b7e3555915" (UID: "a23235c6-1a6b-42cb-a434-08b7e3555915"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.814258 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pmsnh" event={"ID":"aaef08b6-5771-4e63-90e2-f3eb803993ad","Type":"ContainerStarted","Data":"fe37a0352772fb36e158de913042f5d07970aede7274a69f794e11be3129cf1b"} Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.814307 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pmsnh" event={"ID":"aaef08b6-5771-4e63-90e2-f3eb803993ad","Type":"ContainerStarted","Data":"3e773042da1511a1e81355c6d888b2d542a22fa7435de49aab4644624967fc78"} Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.814360 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerDied","Data":"3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9"} Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.814382 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a23235c6-1a6b-42cb-a434-08b7e3555915","Type":"ContainerDied","Data":"b5ca816011f4f5a4791f16842e9d93e8577093c69e06c21a6dbcef6e6e551521"} Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.814405 4758 scope.go:117] "RemoveContainer" containerID="7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.818484 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-pmsnh" podStartSLOduration=2.8184651130000002 podStartE2EDuration="2.818465113s" podCreationTimestamp="2026-01-30 08:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:53:49.802846324 +0000 UTC m=+1434.775157885" watchObservedRunningTime="2026-01-30 08:53:49.818465113 +0000 UTC m=+1434.790776664" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.836313 4758 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.861237 4758 scope.go:117] "RemoveContainer" containerID="7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.882176 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a23235c6-1a6b-42cb-a434-08b7e3555915" (UID: "a23235c6-1a6b-42cb-a434-08b7e3555915"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.892545 4758 scope.go:117] "RemoveContainer" containerID="3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.934645 4758 scope.go:117] "RemoveContainer" containerID="7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.939141 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-config-data" (OuterVolumeSpecName: "config-data") pod "a23235c6-1a6b-42cb-a434-08b7e3555915" (UID: "a23235c6-1a6b-42cb-a434-08b7e3555915"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.939241 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-config-data\") pod \"a23235c6-1a6b-42cb-a434-08b7e3555915\" (UID: \"a23235c6-1a6b-42cb-a434-08b7e3555915\") " Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.939685 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:49 crc kubenswrapper[4758]: W0130 08:53:49.939783 4758 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a23235c6-1a6b-42cb-a434-08b7e3555915/volumes/kubernetes.io~secret/config-data Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.939800 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-config-data" (OuterVolumeSpecName: "config-data") pod "a23235c6-1a6b-42cb-a434-08b7e3555915" (UID: "a23235c6-1a6b-42cb-a434-08b7e3555915"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.963232 4758 scope.go:117] "RemoveContainer" containerID="7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6" Jan 30 08:53:49 crc kubenswrapper[4758]: E0130 08:53:49.964585 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6\": container with ID starting with 7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6 not found: ID does not exist" containerID="7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.964641 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6"} err="failed to get container status \"7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6\": rpc error: code = NotFound desc = could not find container \"7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6\": container with ID starting with 7ba22299f168eb78fe87e8d53a428449b16f3eeecbf7e8c19f86a940b3d067e6 not found: ID does not exist" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.964674 4758 scope.go:117] "RemoveContainer" containerID="7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c" Jan 30 08:53:49 crc kubenswrapper[4758]: E0130 08:53:49.968060 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c\": container with ID starting with 7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c not found: ID does not exist" containerID="7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.968101 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c"} err="failed to get container status \"7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c\": rpc error: code = NotFound desc = could not find container \"7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c\": container with ID starting with 7e4bd70d0e609eeb258cb6de571bb04eeb3d3d64c3dcfdd5b56422032229306c not found: ID does not exist" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.968135 4758 scope.go:117] "RemoveContainer" containerID="3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9" Jan 30 08:53:49 crc kubenswrapper[4758]: E0130 08:53:49.969129 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9\": container with ID starting with 3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9 not found: ID does not exist" containerID="3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.969174 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9"} err="failed to get container status \"3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9\": rpc error: code = NotFound desc = could not 
find container \"3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9\": container with ID starting with 3d9791a1a39c42b46694015be2467d7c97f0323d1a170ad1330010bf9b651ed9 not found: ID does not exist" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.969203 4758 scope.go:117] "RemoveContainer" containerID="7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9" Jan 30 08:53:49 crc kubenswrapper[4758]: E0130 08:53:49.969533 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9\": container with ID starting with 7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9 not found: ID does not exist" containerID="7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9" Jan 30 08:53:49 crc kubenswrapper[4758]: I0130 08:53:49.969561 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9"} err="failed to get container status \"7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9\": rpc error: code = NotFound desc = could not find container \"7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9\": container with ID starting with 7dc691938b989dd48a50d358ba0a9ff65b06e1a0e6d00dc4aac17ff6422354a9 not found: ID does not exist" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.041511 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a23235c6-1a6b-42cb-a434-08b7e3555915-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.135815 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.155221 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.176873 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:53:50 crc kubenswrapper[4758]: E0130 08:53:50.177388 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="ceilometer-central-agent" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.177406 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="ceilometer-central-agent" Jan 30 08:53:50 crc kubenswrapper[4758]: E0130 08:53:50.177421 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="proxy-httpd" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.177427 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="proxy-httpd" Jan 30 08:53:50 crc kubenswrapper[4758]: E0130 08:53:50.177449 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="sg-core" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.177455 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="sg-core" Jan 30 08:53:50 crc kubenswrapper[4758]: E0130 08:53:50.177470 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="ceilometer-notification-agent" 
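The block above shows kubelet's container cleanup being idempotent: each "RemoveContainer" request for an already-deleted container comes back from the CRI-O runtime as rpc code = NotFound, which pod_container_deletor.go records as "DeleteContainer returned error" before the sync loop moves on to the DELETE/REMOVE/ADD events for openstack/ceilometer-0. Every entry follows the same klog text layout: a syslog prefix ("Jan 30 08:53:49 crc kubenswrapper[4758]:"), a severity letter fused to the month/day ("E0130"), a microsecond clock, the emitting pid, the source file:line, and a quoted message with key=value fields. A minimal Python sketch for splitting that prefix apart when grepping a log like this one; the LINE_RE pattern and parse() helper are hypothetical names introduced here, and the regex assumes only the fields visible in the entries above:

    import re

    # klog-style entry as seen in this log, e.g.:
    # Jan 30 08:53:49 crc kubenswrapper[4758]: E0130 08:53:49.964585 4758 log.go:32] "ContainerStatus from runtime service failed" ...
    LINE_RE = re.compile(
        r'^(?P<stamp>\w{3} \d{2} [\d:]{8}) (?P<host>\S+) kubenswrapper\[\d+\]: '
        r'(?P<sev>[IWE])\d{4} (?P<clock>[\d:.]+) +(?P<pid>\d+) (?P<src>[\w.]+:\d+)\] (?P<msg>.*)$'
    )

    def parse(line):
        """Split one kubelet log entry into its prefix fields, or return None
        if the line does not carry the klog prefix (hypothetical helper)."""
        m = LINE_RE.match(line)
        return m.groupdict() if m else None

    sample = ('Jan 30 08:53:49 crc kubenswrapper[4758]: E0130 08:53:49.964585 4758 '
              'log.go:32] "ContainerStatus from runtime service failed" '
              'err="rpc error: code = NotFound ..."')
    entry = parse(sample)
    assert entry and entry["sev"] == "E" and entry["src"] == "log.go:32"

Filtering parsed entries on sev == "E" with "NotFound" in the message would surface exactly the four benign NotFound responses logged above (one per container of the old ceilometer-0 pod), without the surrounding volume-reconciler noise.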
Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.177475 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="ceilometer-notification-agent" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.177660 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="ceilometer-central-agent" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.177697 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="proxy-httpd" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.177711 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="sg-core" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.177722 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" containerName="ceilometer-notification-agent" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.179992 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.186162 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.186491 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.186617 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.204532 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.245662 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r66qc\" (UniqueName: \"kubernetes.io/projected/dc56c5be-c70d-44a7-8914-cf2e598f3333-kube-api-access-r66qc\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.245727 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.245756 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-scripts\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.245787 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-config-data\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.245866 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/dc56c5be-c70d-44a7-8914-cf2e598f3333-log-httpd\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.246030 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.246141 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.246414 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc56c5be-c70d-44a7-8914-cf2e598f3333-run-httpd\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.348238 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc56c5be-c70d-44a7-8914-cf2e598f3333-log-httpd\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.348321 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.348376 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.348432 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc56c5be-c70d-44a7-8914-cf2e598f3333-run-httpd\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.348507 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r66qc\" (UniqueName: \"kubernetes.io/projected/dc56c5be-c70d-44a7-8914-cf2e598f3333-kube-api-access-r66qc\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.348539 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.348561 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-scripts\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.348587 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-config-data\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.349617 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc56c5be-c70d-44a7-8914-cf2e598f3333-run-httpd\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.349888 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc56c5be-c70d-44a7-8914-cf2e598f3333-log-httpd\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.357631 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.360824 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.365179 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-config-data\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.367458 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-scripts\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.367499 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc56c5be-c70d-44a7-8914-cf2e598f3333-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.382897 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r66qc\" (UniqueName: \"kubernetes.io/projected/dc56c5be-c70d-44a7-8914-cf2e598f3333-kube-api-access-r66qc\") pod \"ceilometer-0\" (UID: \"dc56c5be-c70d-44a7-8914-cf2e598f3333\") " pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.564757 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.724992 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.759552 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-config-data\") pod \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.759656 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzkbm\" (UniqueName: \"kubernetes.io/projected/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-kube-api-access-mzkbm\") pod \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.759769 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-logs\") pod \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.759831 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-combined-ca-bundle\") pod \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\" (UID: \"498895d5-d6fe-4ea3-ae4b-c610c53b89c3\") " Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.779337 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-kube-api-access-mzkbm" (OuterVolumeSpecName: "kube-api-access-mzkbm") pod "498895d5-d6fe-4ea3-ae4b-c610c53b89c3" (UID: "498895d5-d6fe-4ea3-ae4b-c610c53b89c3"). InnerVolumeSpecName "kube-api-access-mzkbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.779541 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-logs" (OuterVolumeSpecName: "logs") pod "498895d5-d6fe-4ea3-ae4b-c610c53b89c3" (UID: "498895d5-d6fe-4ea3-ae4b-c610c53b89c3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.811667 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-config-data" (OuterVolumeSpecName: "config-data") pod "498895d5-d6fe-4ea3-ae4b-c610c53b89c3" (UID: "498895d5-d6fe-4ea3-ae4b-c610c53b89c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.821724 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "498895d5-d6fe-4ea3-ae4b-c610c53b89c3" (UID: "498895d5-d6fe-4ea3-ae4b-c610c53b89c3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.845641 4758 generic.go:334] "Generic (PLEG): container finished" podID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerID="f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc" exitCode=0 Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.845749 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.846586 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"498895d5-d6fe-4ea3-ae4b-c610c53b89c3","Type":"ContainerDied","Data":"f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc"} Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.846618 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"498895d5-d6fe-4ea3-ae4b-c610c53b89c3","Type":"ContainerDied","Data":"73e4290d9c9ecf7bf09bd3149f6a065ae76b3173eaddcd9c2fd875f062240283"} Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.846636 4758 scope.go:117] "RemoveContainer" containerID="f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.864378 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzkbm\" (UniqueName: \"kubernetes.io/projected/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-kube-api-access-mzkbm\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.864408 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-logs\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.864419 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.864427 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498895d5-d6fe-4ea3-ae4b-c610c53b89c3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.881249 4758 scope.go:117] "RemoveContainer" containerID="a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.909855 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.920502 4758 scope.go:117] "RemoveContainer" containerID="f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc" Jan 30 08:53:50 crc kubenswrapper[4758]: E0130 08:53:50.921251 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc\": container with ID starting with f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc not found: ID does not exist" containerID="f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.921293 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc"} err="failed to get container status 
\"f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc\": rpc error: code = NotFound desc = could not find container \"f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc\": container with ID starting with f12fcc50ef09043a352af2b39fed9e188c7a555eeb864630ce5dd07b55fe85cc not found: ID does not exist" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.921319 4758 scope.go:117] "RemoveContainer" containerID="a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de" Jan 30 08:53:50 crc kubenswrapper[4758]: E0130 08:53:50.935021 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de\": container with ID starting with a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de not found: ID does not exist" containerID="a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.935074 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de"} err="failed to get container status \"a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de\": rpc error: code = NotFound desc = could not find container \"a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de\": container with ID starting with a5279d3f070e42e02a5adeec6b58dd449c8ed322e77a9a890cd8de84296828de not found: ID does not exist" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.943425 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.972096 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:50 crc kubenswrapper[4758]: E0130 08:53:50.978468 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-api" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.978504 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-api" Jan 30 08:53:50 crc kubenswrapper[4758]: E0130 08:53:50.978529 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-log" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.978537 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-log" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.978721 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-log" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.978734 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" containerName="nova-api-api" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.979811 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.983966 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.985625 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 08:53:50 crc kubenswrapper[4758]: I0130 08:53:50.990264 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.020367 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.069264 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6605f787-f5cc-41f5-bae7-0e1f2006fda5-logs\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.069414 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-config-data\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.069457 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4rn2\" (UniqueName: \"kubernetes.io/projected/6605f787-f5cc-41f5-bae7-0e1f2006fda5-kube-api-access-c4rn2\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.069520 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.069572 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.069623 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-public-tls-certs\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.129337 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.172439 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.173325 4758 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-public-tls-certs\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.173617 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6605f787-f5cc-41f5-bae7-0e1f2006fda5-logs\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.173862 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-config-data\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.173906 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4rn2\" (UniqueName: \"kubernetes.io/projected/6605f787-f5cc-41f5-bae7-0e1f2006fda5-kube-api-access-c4rn2\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.173988 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.174998 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6605f787-f5cc-41f5-bae7-0e1f2006fda5-logs\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.180750 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.183971 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-public-tls-certs\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.188707 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-config-data\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.205569 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.213685 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4rn2\" (UniqueName: 
\"kubernetes.io/projected/6605f787-f5cc-41f5-bae7-0e1f2006fda5-kube-api-access-c4rn2\") pod \"nova-api-0\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") " pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.317479 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.780516 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="498895d5-d6fe-4ea3-ae4b-c610c53b89c3" path="/var/lib/kubelet/pods/498895d5-d6fe-4ea3-ae4b-c610c53b89c3/volumes" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.781573 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a23235c6-1a6b-42cb-a434-08b7e3555915" path="/var/lib/kubelet/pods/a23235c6-1a6b-42cb-a434-08b7e3555915/volumes" Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.855622 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc56c5be-c70d-44a7-8914-cf2e598f3333","Type":"ContainerStarted","Data":"eba56efc1d0c28b01b9d9446966eeb05317a23c38164905823ab78573a752d66"} Jan 30 08:53:51 crc kubenswrapper[4758]: I0130 08:53:51.954121 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 08:53:52 crc kubenswrapper[4758]: I0130 08:53:52.868314 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc56c5be-c70d-44a7-8914-cf2e598f3333","Type":"ContainerStarted","Data":"8ce990c1f83055815056fb655ee6a37afaf0dbac152edb3b1fc3adc7454462ec"} Jan 30 08:53:52 crc kubenswrapper[4758]: I0130 08:53:52.870583 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6605f787-f5cc-41f5-bae7-0e1f2006fda5","Type":"ContainerStarted","Data":"2bca0cb3dd5686f7baa8259f16b57055e85a1ec675df5d7971b68df4de192b48"} Jan 30 08:53:52 crc kubenswrapper[4758]: I0130 08:53:52.870689 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6605f787-f5cc-41f5-bae7-0e1f2006fda5","Type":"ContainerStarted","Data":"63a36fb4f1204bc3ff16b813edfb2539b826c4c6889ef83583df5d90e6712ca2"} Jan 30 08:53:52 crc kubenswrapper[4758]: I0130 08:53:52.870761 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6605f787-f5cc-41f5-bae7-0e1f2006fda5","Type":"ContainerStarted","Data":"f554ad2db5ef82928f9d27ccb5db688bfe0dd035e3eb8df1d1564fab173bc5d5"} Jan 30 08:53:52 crc kubenswrapper[4758]: I0130 08:53:52.891432 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.8914135869999997 podStartE2EDuration="2.891413587s" podCreationTimestamp="2026-01-30 08:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:53:52.88957582 +0000 UTC m=+1437.861887381" watchObservedRunningTime="2026-01-30 08:53:52.891413587 +0000 UTC m=+1437.863725138" Jan 30 08:53:53 crc kubenswrapper[4758]: I0130 08:53:53.890435 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc56c5be-c70d-44a7-8914-cf2e598f3333","Type":"ContainerStarted","Data":"33872e0781c7bc6b3e73e245f915b77e2966fe7ed0f85266a4c1d5bc8c30f441"} Jan 30 08:53:54 crc kubenswrapper[4758]: I0130 08:53:54.512277 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:53:54 crc 
kubenswrapper[4758]: I0130 08:53:54.617186 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6ccb6797-l4vl5"] Jan 30 08:53:54 crc kubenswrapper[4758]: I0130 08:53:54.619185 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" podUID="88205fec-5592-41f1-a351-daf34b97add7" containerName="dnsmasq-dns" containerID="cri-o://dce4b9b027663af6d3ed749e75b2800742d3649e7fa7c5291abccf826e64b995" gracePeriod=10 Jan 30 08:53:54 crc kubenswrapper[4758]: I0130 08:53:54.901746 4758 generic.go:334] "Generic (PLEG): container finished" podID="88205fec-5592-41f1-a351-daf34b97add7" containerID="dce4b9b027663af6d3ed749e75b2800742d3649e7fa7c5291abccf826e64b995" exitCode=0 Jan 30 08:53:54 crc kubenswrapper[4758]: I0130 08:53:54.901791 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" event={"ID":"88205fec-5592-41f1-a351-daf34b97add7","Type":"ContainerDied","Data":"dce4b9b027663af6d3ed749e75b2800742d3649e7fa7c5291abccf826e64b995"} Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.719682 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.802788 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-dns-svc\") pod \"88205fec-5592-41f1-a351-daf34b97add7\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.803346 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwttp\" (UniqueName: \"kubernetes.io/projected/88205fec-5592-41f1-a351-daf34b97add7-kube-api-access-gwttp\") pod \"88205fec-5592-41f1-a351-daf34b97add7\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.803637 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-config\") pod \"88205fec-5592-41f1-a351-daf34b97add7\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.803757 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-sb\") pod \"88205fec-5592-41f1-a351-daf34b97add7\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.804780 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-nb\") pod \"88205fec-5592-41f1-a351-daf34b97add7\" (UID: \"88205fec-5592-41f1-a351-daf34b97add7\") " Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.835496 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88205fec-5592-41f1-a351-daf34b97add7-kube-api-access-gwttp" (OuterVolumeSpecName: "kube-api-access-gwttp") pod "88205fec-5592-41f1-a351-daf34b97add7" (UID: "88205fec-5592-41f1-a351-daf34b97add7"). InnerVolumeSpecName "kube-api-access-gwttp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.878594 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-config" (OuterVolumeSpecName: "config") pod "88205fec-5592-41f1-a351-daf34b97add7" (UID: "88205fec-5592-41f1-a351-daf34b97add7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.899737 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "88205fec-5592-41f1-a351-daf34b97add7" (UID: "88205fec-5592-41f1-a351-daf34b97add7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.905375 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "88205fec-5592-41f1-a351-daf34b97add7" (UID: "88205fec-5592-41f1-a351-daf34b97add7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.913477 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.913515 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwttp\" (UniqueName: \"kubernetes.io/projected/88205fec-5592-41f1-a351-daf34b97add7-kube-api-access-gwttp\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.913532 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.913544 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.926078 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc56c5be-c70d-44a7-8914-cf2e598f3333","Type":"ContainerStarted","Data":"55c5b8766354c39b21c6bf27726094e571c7c205d6b744a36a334ccbe8bfce0f"} Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.932552 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5" event={"ID":"88205fec-5592-41f1-a351-daf34b97add7","Type":"ContainerDied","Data":"44f8b069d0be92a8bc294a5f42966c64f273339f26c92d95f7dfd0fd9149bca0"} Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.933265 4758 scope.go:117] "RemoveContainer" containerID="dce4b9b027663af6d3ed749e75b2800742d3649e7fa7c5291abccf826e64b995" Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.932856 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c6ccb6797-l4vl5"
Jan 30 08:53:55 crc kubenswrapper[4758]: I0130 08:53:55.996016 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "88205fec-5592-41f1-a351-daf34b97add7" (UID: "88205fec-5592-41f1-a351-daf34b97add7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 08:53:56 crc kubenswrapper[4758]: I0130 08:53:56.008582 4758 scope.go:117] "RemoveContainer" containerID="a9b45fc847ddf846195914d0ffbe6a09a108519311beafcfe30dbeedd9e7ad3b"
Jan 30 08:53:56 crc kubenswrapper[4758]: I0130 08:53:56.020246 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88205fec-5592-41f1-a351-daf34b97add7-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:56 crc kubenswrapper[4758]: I0130 08:53:56.271842 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6ccb6797-l4vl5"]
Jan 30 08:53:56 crc kubenswrapper[4758]: I0130 08:53:56.281291 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c6ccb6797-l4vl5"]
Jan 30 08:53:57 crc kubenswrapper[4758]: I0130 08:53:57.778031 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88205fec-5592-41f1-a351-daf34b97add7" path="/var/lib/kubelet/pods/88205fec-5592-41f1-a351-daf34b97add7/volumes"
Jan 30 08:53:57 crc kubenswrapper[4758]: I0130 08:53:57.953245 4758 generic.go:334] "Generic (PLEG): container finished" podID="aaef08b6-5771-4e63-90e2-f3eb803993ad" containerID="fe37a0352772fb36e158de913042f5d07970aede7274a69f794e11be3129cf1b" exitCode=0
Jan 30 08:53:57 crc kubenswrapper[4758]: I0130 08:53:57.953316 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pmsnh" event={"ID":"aaef08b6-5771-4e63-90e2-f3eb803993ad","Type":"ContainerDied","Data":"fe37a0352772fb36e158de913042f5d07970aede7274a69f794e11be3129cf1b"}
Jan 30 08:53:57 crc kubenswrapper[4758]: I0130 08:53:57.958759 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc56c5be-c70d-44a7-8914-cf2e598f3333","Type":"ContainerStarted","Data":"c9dc5d25c811caadd45d847e47abe0897febb24c5d995725ae457fcb51d63fe3"}
Jan 30 08:53:57 crc kubenswrapper[4758]: I0130 08:53:57.959015 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 08:53:57 crc kubenswrapper[4758]: I0130 08:53:57.996818 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.110274981 podStartE2EDuration="7.996800246s" podCreationTimestamp="2026-01-30 08:53:50 +0000 UTC" firstStartedPulling="2026-01-30 08:53:51.152566869 +0000 UTC m=+1436.124878420" lastFinishedPulling="2026-01-30 08:53:57.039086204 +0000 UTC m=+1442.011403685" observedRunningTime="2026-01-30 08:53:57.988880677 +0000 UTC m=+1442.961192228" watchObservedRunningTime="2026-01-30 08:53:57.996800246 +0000 UTC m=+1442.969111797"
Jan 30 08:53:58 crc kubenswrapper[4758]: I0130 08:53:58.333812 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 30 08:53:58 crc kubenswrapper[4758]: I0130 08:53:58.335386 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 30 08:53:58 crc kubenswrapper[4758]: I0130 08:53:58.342210 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 30 08:53:58 crc kubenswrapper[4758]: I0130 08:53:58.978380 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.412340 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pmsnh"
Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.487990 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-config-data\") pod \"aaef08b6-5771-4e63-90e2-f3eb803993ad\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") "
Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.488163 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-combined-ca-bundle\") pod \"aaef08b6-5771-4e63-90e2-f3eb803993ad\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") "
Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.488206 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-scripts\") pod \"aaef08b6-5771-4e63-90e2-f3eb803993ad\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") "
Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.488234 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqvlv\" (UniqueName: \"kubernetes.io/projected/aaef08b6-5771-4e63-90e2-f3eb803993ad-kube-api-access-dqvlv\") pod \"aaef08b6-5771-4e63-90e2-f3eb803993ad\" (UID: \"aaef08b6-5771-4e63-90e2-f3eb803993ad\") "
Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.496176 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-scripts" (OuterVolumeSpecName: "scripts") pod "aaef08b6-5771-4e63-90e2-f3eb803993ad" (UID: "aaef08b6-5771-4e63-90e2-f3eb803993ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.497576 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaef08b6-5771-4e63-90e2-f3eb803993ad-kube-api-access-dqvlv" (OuterVolumeSpecName: "kube-api-access-dqvlv") pod "aaef08b6-5771-4e63-90e2-f3eb803993ad" (UID: "aaef08b6-5771-4e63-90e2-f3eb803993ad"). InnerVolumeSpecName "kube-api-access-dqvlv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.523789 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-config-data" (OuterVolumeSpecName: "config-data") pod "aaef08b6-5771-4e63-90e2-f3eb803993ad" (UID: "aaef08b6-5771-4e63-90e2-f3eb803993ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.536225 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aaef08b6-5771-4e63-90e2-f3eb803993ad" (UID: "aaef08b6-5771-4e63-90e2-f3eb803993ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.590996 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.591046 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.591058 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqvlv\" (UniqueName: \"kubernetes.io/projected/aaef08b6-5771-4e63-90e2-f3eb803993ad-kube-api-access-dqvlv\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.591069 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaef08b6-5771-4e63-90e2-f3eb803993ad-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.978726 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pmsnh"
Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.980677 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pmsnh" event={"ID":"aaef08b6-5771-4e63-90e2-f3eb803993ad","Type":"ContainerDied","Data":"3e773042da1511a1e81355c6d888b2d542a22fa7435de49aab4644624967fc78"}
Jan 30 08:53:59 crc kubenswrapper[4758]: I0130 08:53:59.980829 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e773042da1511a1e81355c6d888b2d542a22fa7435de49aab4644624967fc78"
Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.092089 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.092624 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="015d0689-8130-4f19-bb79-866766c02c63" containerName="nova-scheduler-scheduler" containerID="cri-o://d070fa2fc47120d90dd29347010dd2bd317f21d5673f4354507acac0096009be" gracePeriod=30
Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.155073 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.155319 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerName="nova-api-log" containerID="cri-o://63a36fb4f1204bc3ff16b813edfb2539b826c4c6889ef83583df5d90e6712ca2" gracePeriod=30
Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.155456 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerName="nova-api-api" containerID="cri-o://2bca0cb3dd5686f7baa8259f16b57055e85a1ec675df5d7971b68df4de192b48" gracePeriod=30
Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.214402 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.988921 4758 generic.go:334] "Generic (PLEG): container finished" podID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerID="2bca0cb3dd5686f7baa8259f16b57055e85a1ec675df5d7971b68df4de192b48" exitCode=0
Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.988947 4758 generic.go:334] "Generic (PLEG): container finished" podID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerID="63a36fb4f1204bc3ff16b813edfb2539b826c4c6889ef83583df5d90e6712ca2" exitCode=143
Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.989803 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6605f787-f5cc-41f5-bae7-0e1f2006fda5","Type":"ContainerDied","Data":"2bca0cb3dd5686f7baa8259f16b57055e85a1ec675df5d7971b68df4de192b48"}
Jan 30 08:54:00 crc kubenswrapper[4758]: I0130 08:54:00.989833 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6605f787-f5cc-41f5-bae7-0e1f2006fda5","Type":"ContainerDied","Data":"63a36fb4f1204bc3ff16b813edfb2539b826c4c6889ef83583df5d90e6712ca2"}
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.078723 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.242641 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-combined-ca-bundle\") pod \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") "
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.242706 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-internal-tls-certs\") pod \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") "
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.242756 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4rn2\" (UniqueName: \"kubernetes.io/projected/6605f787-f5cc-41f5-bae7-0e1f2006fda5-kube-api-access-c4rn2\") pod \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") "
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.242959 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-public-tls-certs\") pod \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") "
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.242984 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-config-data\") pod \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") "
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.243121 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6605f787-f5cc-41f5-bae7-0e1f2006fda5-logs\") pod \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\" (UID: \"6605f787-f5cc-41f5-bae7-0e1f2006fda5\") "
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.243882 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6605f787-f5cc-41f5-bae7-0e1f2006fda5-logs" (OuterVolumeSpecName: "logs") pod "6605f787-f5cc-41f5-bae7-0e1f2006fda5" (UID: "6605f787-f5cc-41f5-bae7-0e1f2006fda5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.250191 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6605f787-f5cc-41f5-bae7-0e1f2006fda5-kube-api-access-c4rn2" (OuterVolumeSpecName: "kube-api-access-c4rn2") pod "6605f787-f5cc-41f5-bae7-0e1f2006fda5" (UID: "6605f787-f5cc-41f5-bae7-0e1f2006fda5"). InnerVolumeSpecName "kube-api-access-c4rn2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.277294 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-config-data" (OuterVolumeSpecName: "config-data") pod "6605f787-f5cc-41f5-bae7-0e1f2006fda5" (UID: "6605f787-f5cc-41f5-bae7-0e1f2006fda5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.306380 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6605f787-f5cc-41f5-bae7-0e1f2006fda5" (UID: "6605f787-f5cc-41f5-bae7-0e1f2006fda5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.308622 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6605f787-f5cc-41f5-bae7-0e1f2006fda5" (UID: "6605f787-f5cc-41f5-bae7-0e1f2006fda5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.325747 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6605f787-f5cc-41f5-bae7-0e1f2006fda5" (UID: "6605f787-f5cc-41f5-bae7-0e1f2006fda5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.345697 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.345735 4758 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.345748 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4rn2\" (UniqueName: \"kubernetes.io/projected/6605f787-f5cc-41f5-bae7-0e1f2006fda5-kube-api-access-c4rn2\") on node \"crc\" DevicePath \"\""
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.345762 4758 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.345772 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6605f787-f5cc-41f5-bae7-0e1f2006fda5-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:54:01 crc kubenswrapper[4758]: I0130 08:54:01.345781 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6605f787-f5cc-41f5-bae7-0e1f2006fda5-logs\") on node \"crc\" DevicePath \"\""
Jan 30 08:54:01 crc kubenswrapper[4758]: E0130 08:54:01.894168 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d070fa2fc47120d90dd29347010dd2bd317f21d5673f4354507acac0096009be" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 30 08:54:01 crc kubenswrapper[4758]: E0130 08:54:01.895466 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d070fa2fc47120d90dd29347010dd2bd317f21d5673f4354507acac0096009be" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 30 08:54:01 crc kubenswrapper[4758]: E0130 08:54:01.896519 4758 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d070fa2fc47120d90dd29347010dd2bd317f21d5673f4354507acac0096009be" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 30 08:54:01 crc kubenswrapper[4758]: E0130 08:54:01.896565 4758 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="015d0689-8130-4f19-bb79-866766c02c63" containerName="nova-scheduler-scheduler"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.000779 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6605f787-f5cc-41f5-bae7-0e1f2006fda5","Type":"ContainerDied","Data":"f554ad2db5ef82928f9d27ccb5db688bfe0dd035e3eb8df1d1564fab173bc5d5"}
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.000855 4758 scope.go:117] "RemoveContainer" containerID="2bca0cb3dd5686f7baa8259f16b57055e85a1ec675df5d7971b68df4de192b48"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.000801 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.001064 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-metadata" containerID="cri-o://0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01" gracePeriod=30
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.000851 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-log" containerID="cri-o://c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de" gracePeriod=30
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.038241 4758 scope.go:117] "RemoveContainer" containerID="63a36fb4f1204bc3ff16b813edfb2539b826c4c6889ef83583df5d90e6712ca2"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.042244 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.054363 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.085106 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 30 08:54:02 crc kubenswrapper[4758]: E0130 08:54:02.085588 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaef08b6-5771-4e63-90e2-f3eb803993ad" containerName="nova-manage"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.085608 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaef08b6-5771-4e63-90e2-f3eb803993ad" containerName="nova-manage"
Jan 30 08:54:02 crc kubenswrapper[4758]: E0130 08:54:02.085636 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88205fec-5592-41f1-a351-daf34b97add7" containerName="init"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.085644 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="88205fec-5592-41f1-a351-daf34b97add7" containerName="init"
Jan 30 08:54:02 crc kubenswrapper[4758]: E0130 08:54:02.085664 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerName="nova-api-log"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.085673 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerName="nova-api-log"
Jan 30 08:54:02 crc kubenswrapper[4758]: E0130 08:54:02.085689 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88205fec-5592-41f1-a351-daf34b97add7" containerName="dnsmasq-dns"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.085696 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="88205fec-5592-41f1-a351-daf34b97add7" containerName="dnsmasq-dns"
Jan 30 08:54:02 crc kubenswrapper[4758]: E0130 08:54:02.085720 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerName="nova-api-api"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.085729 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerName="nova-api-api"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.085980 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerName="nova-api-log"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.086006 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="88205fec-5592-41f1-a351-daf34b97add7" containerName="dnsmasq-dns"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.086026 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" containerName="nova-api-api"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.086071 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaef08b6-5771-4e63-90e2-f3eb803993ad" containerName="nova-manage"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.087140 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.095122 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.095355 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.099291 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.154389 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.161372 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-config-data\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.161500 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.161630 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-public-tls-certs\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.161769 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dr4f\" (UniqueName: \"kubernetes.io/projected/fd2d2fe7-5dac-4f3b-80f6-650712925495-kube-api-access-2dr4f\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.161832 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.161911 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd2d2fe7-5dac-4f3b-80f6-650712925495-logs\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.264055 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-public-tls-certs\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.264134 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dr4f\" (UniqueName: \"kubernetes.io/projected/fd2d2fe7-5dac-4f3b-80f6-650712925495-kube-api-access-2dr4f\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.264164 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.264208 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd2d2fe7-5dac-4f3b-80f6-650712925495-logs\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.264295 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-config-data\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.264324 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.265182 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd2d2fe7-5dac-4f3b-80f6-650712925495-logs\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.269991 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-config-data\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.270961 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.273293 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-public-tls-certs\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.282497 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd2d2fe7-5dac-4f3b-80f6-650712925495-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.283427 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dr4f\" (UniqueName: \"kubernetes.io/projected/fd2d2fe7-5dac-4f3b-80f6-650712925495-kube-api-access-2dr4f\") pod \"nova-api-0\" (UID: \"fd2d2fe7-5dac-4f3b-80f6-650712925495\") " pod="openstack/nova-api-0"
Jan 30 08:54:02 crc kubenswrapper[4758]: I0130 08:54:02.552480 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.013539 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.017470 4758 generic.go:334] "Generic (PLEG): container finished" podID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerID="c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de" exitCode=143
Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.017520 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f43adcc0-5ee5-4f9e-be48-03a532f349d5","Type":"ContainerDied","Data":"c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de"}
Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.019766 4758 generic.go:334] "Generic (PLEG): container finished" podID="015d0689-8130-4f19-bb79-866766c02c63" containerID="d070fa2fc47120d90dd29347010dd2bd317f21d5673f4354507acac0096009be" exitCode=0
Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.019798 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"015d0689-8130-4f19-bb79-866766c02c63","Type":"ContainerDied","Data":"d070fa2fc47120d90dd29347010dd2bd317f21d5673f4354507acac0096009be"}
Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.019818 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"015d0689-8130-4f19-bb79-866766c02c63","Type":"ContainerDied","Data":"7df77a789ad68a7f771f50962b45656f81dbca922fc56cf93ed7fd78a2640d9a"}
Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.019831 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7df77a789ad68a7f771f50962b45656f81dbca922fc56cf93ed7fd78a2640d9a"
Jan 30 08:54:03 crc kubenswrapper[4758]: W0130 08:54:03.027970 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd2d2fe7_5dac_4f3b_80f6_650712925495.slice/crio-8f74332b07996cff995e0f4eb922bf4fa08d3d7e27a7800fe6c3061a6c53a933 WatchSource:0}: Error finding container 8f74332b07996cff995e0f4eb922bf4fa08d3d7e27a7800fe6c3061a6c53a933: Status 404 returned error can't find the container with id 8f74332b07996cff995e0f4eb922bf4fa08d3d7e27a7800fe6c3061a6c53a933
Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.103228 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.191386 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-config-data\") pod \"015d0689-8130-4f19-bb79-866766c02c63\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") "
Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.191527 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq64w\" (UniqueName: \"kubernetes.io/projected/015d0689-8130-4f19-bb79-866766c02c63-kube-api-access-jq64w\") pod \"015d0689-8130-4f19-bb79-866766c02c63\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") "
Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.191620 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-combined-ca-bundle\") pod \"015d0689-8130-4f19-bb79-866766c02c63\" (UID: \"015d0689-8130-4f19-bb79-866766c02c63\") "
Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.198336 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015d0689-8130-4f19-bb79-866766c02c63-kube-api-access-jq64w" (OuterVolumeSpecName: "kube-api-access-jq64w") pod "015d0689-8130-4f19-bb79-866766c02c63" (UID: "015d0689-8130-4f19-bb79-866766c02c63"). InnerVolumeSpecName "kube-api-access-jq64w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.229885 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-config-data" (OuterVolumeSpecName: "config-data") pod "015d0689-8130-4f19-bb79-866766c02c63" (UID: "015d0689-8130-4f19-bb79-866766c02c63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.236592 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "015d0689-8130-4f19-bb79-866766c02c63" (UID: "015d0689-8130-4f19-bb79-866766c02c63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.294256 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.294519 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015d0689-8130-4f19-bb79-866766c02c63-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.294602 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq64w\" (UniqueName: \"kubernetes.io/projected/015d0689-8130-4f19-bb79-866766c02c63-kube-api-access-jq64w\") on node \"crc\" DevicePath \"\""
Jan 30 08:54:03 crc kubenswrapper[4758]: I0130 08:54:03.780408 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6605f787-f5cc-41f5-bae7-0e1f2006fda5" path="/var/lib/kubelet/pods/6605f787-f5cc-41f5-bae7-0e1f2006fda5/volumes"
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.030595 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.031522 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd2d2fe7-5dac-4f3b-80f6-650712925495","Type":"ContainerStarted","Data":"c19d0359d077740e2a60423426de85f1e9e4215d4e0192236ad6bd8ce44d37ab"}
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.031577 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd2d2fe7-5dac-4f3b-80f6-650712925495","Type":"ContainerStarted","Data":"e30080425fe17c9c7171805e7cf3a30d7971aea5633984ff5581cd7e852e852e"}
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.031589 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd2d2fe7-5dac-4f3b-80f6-650712925495","Type":"ContainerStarted","Data":"8f74332b07996cff995e0f4eb922bf4fa08d3d7e27a7800fe6c3061a6c53a933"}
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.075718 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.075694168 podStartE2EDuration="2.075694168s" podCreationTimestamp="2026-01-30 08:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:54:04.057649864 +0000 UTC m=+1449.029961415" watchObservedRunningTime="2026-01-30 08:54:04.075694168 +0000 UTC m=+1449.048005719"
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.089499 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.109309 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.122665 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 08:54:04 crc kubenswrapper[4758]: E0130 08:54:04.123162 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015d0689-8130-4f19-bb79-866766c02c63" containerName="nova-scheduler-scheduler"
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.123185 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="015d0689-8130-4f19-bb79-866766c02c63" containerName="nova-scheduler-scheduler"
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.123410 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="015d0689-8130-4f19-bb79-866766c02c63" containerName="nova-scheduler-scheduler"
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.124161 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.131425 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.135163 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.223437 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9fc9\" (UniqueName: \"kubernetes.io/projected/4697d282-598e-4faa-ae13-6ba6d3747bf0-kube-api-access-b9fc9\") pod \"nova-scheduler-0\" (UID: \"4697d282-598e-4faa-ae13-6ba6d3747bf0\") " pod="openstack/nova-scheduler-0"
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.223600 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4697d282-598e-4faa-ae13-6ba6d3747bf0-config-data\") pod \"nova-scheduler-0\" (UID: \"4697d282-598e-4faa-ae13-6ba6d3747bf0\") " pod="openstack/nova-scheduler-0"
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.223652 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4697d282-598e-4faa-ae13-6ba6d3747bf0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4697d282-598e-4faa-ae13-6ba6d3747bf0\") " pod="openstack/nova-scheduler-0"
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.324926 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4697d282-598e-4faa-ae13-6ba6d3747bf0-config-data\") pod \"nova-scheduler-0\" (UID: \"4697d282-598e-4faa-ae13-6ba6d3747bf0\") " pod="openstack/nova-scheduler-0"
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.325103 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4697d282-598e-4faa-ae13-6ba6d3747bf0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4697d282-598e-4faa-ae13-6ba6d3747bf0\") " pod="openstack/nova-scheduler-0"
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.325186 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9fc9\" (UniqueName: \"kubernetes.io/projected/4697d282-598e-4faa-ae13-6ba6d3747bf0-kube-api-access-b9fc9\") pod \"nova-scheduler-0\" (UID: \"4697d282-598e-4faa-ae13-6ba6d3747bf0\") " pod="openstack/nova-scheduler-0"
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.332100 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4697d282-598e-4faa-ae13-6ba6d3747bf0-config-data\") pod \"nova-scheduler-0\" (UID: \"4697d282-598e-4faa-ae13-6ba6d3747bf0\") " pod="openstack/nova-scheduler-0"
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.332153 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4697d282-598e-4faa-ae13-6ba6d3747bf0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4697d282-598e-4faa-ae13-6ba6d3747bf0\") " pod="openstack/nova-scheduler-0"
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.344582 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9fc9\" (UniqueName: \"kubernetes.io/projected/4697d282-598e-4faa-ae13-6ba6d3747bf0-kube-api-access-b9fc9\") pod \"nova-scheduler-0\" (UID: \"4697d282-598e-4faa-ae13-6ba6d3747bf0\") " pod="openstack/nova-scheduler-0"
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.441866 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 08:54:04 crc kubenswrapper[4758]: I0130 08:54:04.913755 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 08:54:04 crc kubenswrapper[4758]: W0130 08:54:04.916497 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4697d282_598e_4faa_ae13_6ba6d3747bf0.slice/crio-21b50cab33aed98c77936b67b448722f0ab06ad8426dfa4f790aa794e05db43d WatchSource:0}: Error finding container 21b50cab33aed98c77936b67b448722f0ab06ad8426dfa4f790aa794e05db43d: Status 404 returned error can't find the container with id 21b50cab33aed98c77936b67b448722f0ab06ad8426dfa4f790aa794e05db43d
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.040643 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4697d282-598e-4faa-ae13-6ba6d3747bf0","Type":"ContainerStarted","Data":"21b50cab33aed98c77936b67b448722f0ab06ad8426dfa4f790aa794e05db43d"}
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.190349 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": read tcp 10.217.0.2:41606->10.217.0.202:8775: read: connection reset by peer"
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.190408 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": read tcp 10.217.0.2:41620->10.217.0.202:8775: read: connection reset by peer"
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.603892 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.756861 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-config-data\") pod \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") "
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.756966 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfbj4\" (UniqueName: \"kubernetes.io/projected/f43adcc0-5ee5-4f9e-be48-03a532f349d5-kube-api-access-mfbj4\") pod \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") "
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.757019 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-combined-ca-bundle\") pod \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") "
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.757115 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-nova-metadata-tls-certs\") pod \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") "
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.757174 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f43adcc0-5ee5-4f9e-be48-03a532f349d5-logs\") pod \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\" (UID: \"f43adcc0-5ee5-4f9e-be48-03a532f349d5\") "
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.758153 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f43adcc0-5ee5-4f9e-be48-03a532f349d5-logs" (OuterVolumeSpecName: "logs") pod "f43adcc0-5ee5-4f9e-be48-03a532f349d5" (UID: "f43adcc0-5ee5-4f9e-be48-03a532f349d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.765244 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f43adcc0-5ee5-4f9e-be48-03a532f349d5-kube-api-access-mfbj4" (OuterVolumeSpecName: "kube-api-access-mfbj4") pod "f43adcc0-5ee5-4f9e-be48-03a532f349d5" (UID: "f43adcc0-5ee5-4f9e-be48-03a532f349d5"). InnerVolumeSpecName "kube-api-access-mfbj4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.782819 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="015d0689-8130-4f19-bb79-866766c02c63" path="/var/lib/kubelet/pods/015d0689-8130-4f19-bb79-866766c02c63/volumes"
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.802148 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f43adcc0-5ee5-4f9e-be48-03a532f349d5" (UID: "f43adcc0-5ee5-4f9e-be48-03a532f349d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.814229 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-config-data" (OuterVolumeSpecName: "config-data") pod "f43adcc0-5ee5-4f9e-be48-03a532f349d5" (UID: "f43adcc0-5ee5-4f9e-be48-03a532f349d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.829348 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f43adcc0-5ee5-4f9e-be48-03a532f349d5" (UID: "f43adcc0-5ee5-4f9e-be48-03a532f349d5"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.859577 4758 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f43adcc0-5ee5-4f9e-be48-03a532f349d5-logs\") on node \"crc\" DevicePath \"\""
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.859614 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.859625 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfbj4\" (UniqueName: \"kubernetes.io/projected/f43adcc0-5ee5-4f9e-be48-03a532f349d5-kube-api-access-mfbj4\") on node \"crc\" DevicePath \"\""
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.859639 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 08:54:05 crc kubenswrapper[4758]: I0130 08:54:05.859647 4758 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43adcc0-5ee5-4f9e-be48-03a532f349d5-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.055386 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4697d282-598e-4faa-ae13-6ba6d3747bf0","Type":"ContainerStarted","Data":"9f85b83ab2936da665dbbbbdc245073acfd0557b87921496b93ff6472a4059ec"}
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.060581 4758 generic.go:334] "Generic (PLEG): container finished" podID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerID="0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01" exitCode=0
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.060636 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f43adcc0-5ee5-4f9e-be48-03a532f349d5","Type":"ContainerDied","Data":"0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01"}
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.060690 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.060708 4758 scope.go:117] "RemoveContainer" containerID="0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.060669 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f43adcc0-5ee5-4f9e-be48-03a532f349d5","Type":"ContainerDied","Data":"3f9633af4e5e2d3e1abc96a29475ac2109ec7d4b547a737c821f935b6a92090b"}
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.075240 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.075219522 podStartE2EDuration="2.075219522s" podCreationTimestamp="2026-01-30 08:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:54:06.073331823 +0000 UTC m=+1451.045643384" watchObservedRunningTime="2026-01-30 08:54:06.075219522 +0000 UTC m=+1451.047531073"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.118252 4758 scope.go:117] "RemoveContainer" containerID="c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.123822 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.168107 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.183218 4758 scope.go:117] "RemoveContainer" containerID="0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01"
Jan 30 08:54:06 crc kubenswrapper[4758]: E0130 08:54:06.187176 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01\": container with ID starting with 0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01 not found: ID does not exist" containerID="0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.187221 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01"} err="failed to get container status \"0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01\": rpc error: code = NotFound desc = could not find container \"0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01\": container with ID starting with 0422d7fe62ce5c19dbcd1edd6525e6d0dfd52cafdb1fb5d77beb80624afa9f01 not found: ID does not exist"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.187252 4758 scope.go:117] "RemoveContainer" containerID="c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.189877 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 08:54:06 crc kubenswrapper[4758]: E0130 08:54:06.190363 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-metadata"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.190390 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-metadata"
Jan 30 08:54:06 crc kubenswrapper[4758]: E0130 08:54:06.190413 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-log"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.190422 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-log"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.190846 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-metadata"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.190873 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" containerName="nova-metadata-log"
Jan 30 08:54:06 crc kubenswrapper[4758]: E0130 08:54:06.194191 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de\": container with ID starting with c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de not found: ID does not exist" containerID="c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.194237 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de"} err="failed to get container status \"c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de\": rpc error: code = NotFound desc = could not find container \"c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de\": container with ID starting with c93da2f6abab7737afb45b672c34330de5db220e15419dcf4187c1f9b06378de not found: ID does not exist"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.199394 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.207515 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.207770 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.217135 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.374692 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4213c10-dde9-4a4d-9af9-304dd08f755c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.374747 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4213c10-dde9-4a4d-9af9-304dd08f755c-logs\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.374938 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4213c10-dde9-4a4d-9af9-304dd08f755c-config-data\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.375226 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4213c10-dde9-4a4d-9af9-304dd08f755c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.375258 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h28t5\" (UniqueName: \"kubernetes.io/projected/e4213c10-dde9-4a4d-9af9-304dd08f755c-kube-api-access-h28t5\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.477130 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4213c10-dde9-4a4d-9af9-304dd08f755c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.477183 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h28t5\" (UniqueName: \"kubernetes.io/projected/e4213c10-dde9-4a4d-9af9-304dd08f755c-kube-api-access-h28t5\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.477254 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4213c10-dde9-4a4d-9af9-304dd08f755c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.477279 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4213c10-dde9-4a4d-9af9-304dd08f755c-logs\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.477329 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4213c10-dde9-4a4d-9af9-304dd08f755c-config-data\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.477873 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4213c10-dde9-4a4d-9af9-304dd08f755c-logs\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.480848 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4213c10-dde9-4a4d-9af9-304dd08f755c-config-data\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.481596 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4213c10-dde9-4a4d-9af9-304dd08f755c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.493397 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4213c10-dde9-4a4d-9af9-304dd08f755c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.494620 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h28t5\" (UniqueName: \"kubernetes.io/projected/e4213c10-dde9-4a4d-9af9-304dd08f755c-kube-api-access-h28t5\") pod \"nova-metadata-0\" (UID: \"e4213c10-dde9-4a4d-9af9-304dd08f755c\") " pod="openstack/nova-metadata-0"
Jan 30 08:54:06 crc kubenswrapper[4758]: I0130 08:54:06.542233 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 08:54:07 crc kubenswrapper[4758]: I0130 08:54:07.008263 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 08:54:07 crc kubenswrapper[4758]: I0130 08:54:07.079068 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4213c10-dde9-4a4d-9af9-304dd08f755c","Type":"ContainerStarted","Data":"b48e69a863bfe56b316184e825346d6d8c42327738aa1a0e6ca8c7c6ee240f28"}
Jan 30 08:54:07 crc kubenswrapper[4758]: I0130 08:54:07.781635 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f43adcc0-5ee5-4f9e-be48-03a532f349d5" path="/var/lib/kubelet/pods/f43adcc0-5ee5-4f9e-be48-03a532f349d5/volumes"
Jan 30 08:54:08 crc kubenswrapper[4758]: I0130 08:54:08.096688 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4213c10-dde9-4a4d-9af9-304dd08f755c","Type":"ContainerStarted","Data":"1c7e9abae1d0bce4140864c774429f23c5f17433398beab54a53e1b8c1c9cb75"}
Jan 30 08:54:08 crc kubenswrapper[4758]: I0130 08:54:08.096734 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4213c10-dde9-4a4d-9af9-304dd08f755c","Type":"ContainerStarted","Data":"88526735d3d629820f39fae8305908f787e6d180fcc7b15039c90f05d5fdfed9"}
Jan 30 08:54:08 crc kubenswrapper[4758]: I0130 08:54:08.120760 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.120718243 podStartE2EDuration="2.120718243s" podCreationTimestamp="2026-01-30 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:54:08.117498653 +0000 UTC m=+1453.089810214" watchObservedRunningTime="2026-01-30 08:54:08.120718243 +0000 UTC m=+1453.093029794"
Jan 30 08:54:09 crc kubenswrapper[4758]: I0130 08:54:09.442412 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 30 08:54:11 crc kubenswrapper[4758]: I0130 08:54:11.542641 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 30 08:54:11 crc kubenswrapper[4758]: I0130 08:54:11.544631 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 30 08:54:12 crc kubenswrapper[4758]: I0130 08:54:12.553618 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 08:54:12 crc kubenswrapper[4758]: I0130 08:54:12.553907 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 08:54:13 crc kubenswrapper[4758]: I0130 08:54:13.564228 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd2d2fe7-5dac-4f3b-80f6-650712925495" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 08:54:13 crc kubenswrapper[4758]: I0130 08:54:13.564255 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd2d2fe7-5dac-4f3b-80f6-650712925495" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 08:54:14 crc kubenswrapper[4758]: I0130 08:54:14.444985 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 30 08:54:14 crc kubenswrapper[4758]: I0130 08:54:14.476034 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 30 08:54:15 crc kubenswrapper[4758]: I0130 08:54:15.182554 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 30 08:54:16 crc kubenswrapper[4758]: I0130 08:54:16.544770 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 30 08:54:16 crc kubenswrapper[4758]: I0130 08:54:16.545513 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 30 08:54:17 crc kubenswrapper[4758]: I0130 08:54:17.558231 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e4213c10-dde9-4a4d-9af9-304dd08f755c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 08:54:17 crc kubenswrapper[4758]: I0130 08:54:17.558242 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e4213c10-dde9-4a4d-9af9-304dd08f755c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 08:54:20 crc kubenswrapper[4758]: I0130 08:54:20.573632 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 30 08:54:22 crc kubenswrapper[4758]: I0130 08:54:22.389496 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 08:54:22 crc kubenswrapper[4758]: I0130 08:54:22.390161 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 08:54:22 crc kubenswrapper[4758]: I0130 08:54:22.566152 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 30 08:54:22 crc kubenswrapper[4758]: I0130 08:54:22.569604 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 08:54:22 crc kubenswrapper[4758]: I0130 08:54:22.570006 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 30 08:54:22 crc kubenswrapper[4758]: I0130 08:54:22.577079 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 08:54:23 crc kubenswrapper[4758]: I0130 08:54:23.236639 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 08:54:23 crc kubenswrapper[4758]: I0130 08:54:23.246343 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 08:54:23 crc kubenswrapper[4758]: E0130 08:54:23.980864 4758 pod_workers.go:1301] "Error syncing pod, skipping"
err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="f978baf9-b7c0-4d25-8bca-e95a018ba2af" Jan 30 08:54:24 crc kubenswrapper[4758]: I0130 08:54:24.244406 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 08:54:26 crc kubenswrapper[4758]: I0130 08:54:26.119952 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:54:26 crc kubenswrapper[4758]: E0130 08:54:26.120175 4758 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 08:54:26 crc kubenswrapper[4758]: E0130 08:54:26.120210 4758 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 08:54:26 crc kubenswrapper[4758]: E0130 08:54:26.120276 4758 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift podName:f978baf9-b7c0-4d25-8bca-e95a018ba2af nodeName:}" failed. No retries permitted until 2026-01-30 08:56:28.120254503 +0000 UTC m=+1593.092566054 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift") pod "swift-storage-0" (UID: "f978baf9-b7c0-4d25-8bca-e95a018ba2af") : configmap "swift-ring-files" not found Jan 30 08:54:26 crc kubenswrapper[4758]: I0130 08:54:26.550204 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 08:54:26 crc kubenswrapper[4758]: I0130 08:54:26.550863 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 08:54:26 crc kubenswrapper[4758]: I0130 08:54:26.555221 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 08:54:27 crc kubenswrapper[4758]: I0130 08:54:27.272605 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.021246 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-ws6z7"] Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.023147 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.025924 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.031699 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.037580 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ws6z7"] Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.108563 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-ring-data-devices\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.108653 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-swiftconf\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.108753 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0a36303a-ea35-4a12-be39-906481ea247a-etc-swift\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.108807 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6fb6\" (UniqueName: \"kubernetes.io/projected/0a36303a-ea35-4a12-be39-906481ea247a-kube-api-access-m6fb6\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.108833 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-combined-ca-bundle\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.108856 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-dispersionconf\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.108920 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-scripts\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.210072 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/0a36303a-ea35-4a12-be39-906481ea247a-etc-swift\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.210142 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6fb6\" (UniqueName: \"kubernetes.io/projected/0a36303a-ea35-4a12-be39-906481ea247a-kube-api-access-m6fb6\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.210169 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-combined-ca-bundle\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.210206 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-dispersionconf\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.210291 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-scripts\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.210342 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-ring-data-devices\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.210370 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-swiftconf\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.211567 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0a36303a-ea35-4a12-be39-906481ea247a-etc-swift\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.211904 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-ring-data-devices\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.212185 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-scripts\") pod \"swift-ring-rebalance-ws6z7\" (UID: 
\"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.215670 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-swiftconf\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.216654 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-combined-ca-bundle\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.225762 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-dispersionconf\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.227166 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6fb6\" (UniqueName: \"kubernetes.io/projected/0a36303a-ea35-4a12-be39-906481ea247a-kube-api-access-m6fb6\") pod \"swift-ring-rebalance-ws6z7\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.352990 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-kt9hk" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.361194 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:44 crc kubenswrapper[4758]: I0130 08:54:44.829755 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ws6z7"] Jan 30 08:54:45 crc kubenswrapper[4758]: I0130 08:54:45.416527 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ws6z7" event={"ID":"0a36303a-ea35-4a12-be39-906481ea247a","Type":"ContainerStarted","Data":"9d3059fcb6e54b15e5a83e28a91ccb5fd27d766b3cd8091bb00825f4c3db1d9a"} Jan 30 08:54:48 crc kubenswrapper[4758]: I0130 08:54:48.448408 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ws6z7" event={"ID":"0a36303a-ea35-4a12-be39-906481ea247a","Type":"ContainerStarted","Data":"4977aaefd3e81d95488932e45781b698031c45e3d8e91e0cfdde1b5ff7bc8926"} Jan 30 08:54:48 crc kubenswrapper[4758]: I0130 08:54:48.472367 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-ws6z7" podStartSLOduration=2.2697492759999998 podStartE2EDuration="5.472346712s" podCreationTimestamp="2026-01-30 08:54:43 +0000 UTC" firstStartedPulling="2026-01-30 08:54:44.833243843 +0000 UTC m=+1489.805555414" lastFinishedPulling="2026-01-30 08:54:48.035841299 +0000 UTC m=+1493.008152850" observedRunningTime="2026-01-30 08:54:48.471173605 +0000 UTC m=+1493.443485166" watchObservedRunningTime="2026-01-30 08:54:48.472346712 +0000 UTC m=+1493.444658263" Jan 30 08:54:52 crc kubenswrapper[4758]: I0130 08:54:52.387580 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:54:52 crc kubenswrapper[4758]: I0130 08:54:52.387975 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:54:55 crc kubenswrapper[4758]: I0130 08:54:55.508223 4758 generic.go:334] "Generic (PLEG): container finished" podID="0a36303a-ea35-4a12-be39-906481ea247a" containerID="4977aaefd3e81d95488932e45781b698031c45e3d8e91e0cfdde1b5ff7bc8926" exitCode=0 Jan 30 08:54:55 crc kubenswrapper[4758]: I0130 08:54:55.508330 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ws6z7" event={"ID":"0a36303a-ea35-4a12-be39-906481ea247a","Type":"ContainerDied","Data":"4977aaefd3e81d95488932e45781b698031c45e3d8e91e0cfdde1b5ff7bc8926"} Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.855657 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.893350 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-scripts\") pod \"0a36303a-ea35-4a12-be39-906481ea247a\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.893406 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-ring-data-devices\") pod \"0a36303a-ea35-4a12-be39-906481ea247a\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.893451 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-swiftconf\") pod \"0a36303a-ea35-4a12-be39-906481ea247a\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.893512 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0a36303a-ea35-4a12-be39-906481ea247a-etc-swift\") pod \"0a36303a-ea35-4a12-be39-906481ea247a\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.893582 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6fb6\" (UniqueName: \"kubernetes.io/projected/0a36303a-ea35-4a12-be39-906481ea247a-kube-api-access-m6fb6\") pod \"0a36303a-ea35-4a12-be39-906481ea247a\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.893617 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-combined-ca-bundle\") pod \"0a36303a-ea35-4a12-be39-906481ea247a\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.893652 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-dispersionconf\") pod \"0a36303a-ea35-4a12-be39-906481ea247a\" (UID: \"0a36303a-ea35-4a12-be39-906481ea247a\") " Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.895064 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a36303a-ea35-4a12-be39-906481ea247a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0a36303a-ea35-4a12-be39-906481ea247a" (UID: "0a36303a-ea35-4a12-be39-906481ea247a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.895983 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "0a36303a-ea35-4a12-be39-906481ea247a" (UID: "0a36303a-ea35-4a12-be39-906481ea247a"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.910469 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a36303a-ea35-4a12-be39-906481ea247a-kube-api-access-m6fb6" (OuterVolumeSpecName: "kube-api-access-m6fb6") pod "0a36303a-ea35-4a12-be39-906481ea247a" (UID: "0a36303a-ea35-4a12-be39-906481ea247a"). InnerVolumeSpecName "kube-api-access-m6fb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.920993 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-scripts" (OuterVolumeSpecName: "scripts") pod "0a36303a-ea35-4a12-be39-906481ea247a" (UID: "0a36303a-ea35-4a12-be39-906481ea247a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.929511 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a36303a-ea35-4a12-be39-906481ea247a" (UID: "0a36303a-ea35-4a12-be39-906481ea247a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.931972 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "0a36303a-ea35-4a12-be39-906481ea247a" (UID: "0a36303a-ea35-4a12-be39-906481ea247a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.933619 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "0a36303a-ea35-4a12-be39-906481ea247a" (UID: "0a36303a-ea35-4a12-be39-906481ea247a"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.996311 4758 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.996691 4758 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0a36303a-ea35-4a12-be39-906481ea247a-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.996792 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6fb6\" (UniqueName: \"kubernetes.io/projected/0a36303a-ea35-4a12-be39-906481ea247a-kube-api-access-m6fb6\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.996854 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.996916 4758 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0a36303a-ea35-4a12-be39-906481ea247a-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.996987 4758 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:56 crc kubenswrapper[4758]: I0130 08:54:56.997074 4758 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0a36303a-ea35-4a12-be39-906481ea247a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 08:54:57 crc kubenswrapper[4758]: I0130 08:54:57.531290 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ws6z7" event={"ID":"0a36303a-ea35-4a12-be39-906481ea247a","Type":"ContainerDied","Data":"9d3059fcb6e54b15e5a83e28a91ccb5fd27d766b3cd8091bb00825f4c3db1d9a"} Jan 30 08:54:57 crc kubenswrapper[4758]: I0130 08:54:57.532239 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d3059fcb6e54b15e5a83e28a91ccb5fd27d766b3cd8091bb00825f4c3db1d9a" Jan 30 08:54:57 crc kubenswrapper[4758]: I0130 08:54:57.531355 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ws6z7" Jan 30 08:55:03 crc kubenswrapper[4758]: E0130 08:55:03.991519 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-proxy-75f5775999-fhl5h" podUID="c2358e5c-db98-4b7b-8b6c-2e83132655a9" Jan 30 08:55:04 crc kubenswrapper[4758]: I0130 08:55:04.603478 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:55:05 crc kubenswrapper[4758]: I0130 08:55:05.507110 4758 scope.go:117] "RemoveContainer" containerID="44986833bc4a6d26e1f0961e4b7a2ef317d50d30ae747ee1fe73659d1eb85ff3" Jan 30 08:55:05 crc kubenswrapper[4758]: I0130 08:55:05.541092 4758 scope.go:117] "RemoveContainer" containerID="183015e47f5f83a8e3749301c9da8e5904ca92fc6fc6359bf8ab059b1e015b60" Jan 30 08:55:05 crc kubenswrapper[4758]: I0130 08:55:05.574073 4758 scope.go:117] "RemoveContainer" containerID="86c33d040e9741a4c414c369434d352975d29a3e661c9ee51459ded8965dad1b" Jan 30 08:55:07 crc kubenswrapper[4758]: I0130 08:55:07.905216 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:55:07 crc kubenswrapper[4758]: I0130 08:55:07.912329 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2358e5c-db98-4b7b-8b6c-2e83132655a9-etc-swift\") pod \"swift-proxy-75f5775999-fhl5h\" (UID: \"c2358e5c-db98-4b7b-8b6c-2e83132655a9\") " pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:55:08 crc kubenswrapper[4758]: I0130 08:55:08.205475 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:55:08 crc kubenswrapper[4758]: I0130 08:55:08.806940 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-75f5775999-fhl5h"] Jan 30 08:55:09 crc kubenswrapper[4758]: I0130 08:55:09.671988 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-75f5775999-fhl5h" event={"ID":"c2358e5c-db98-4b7b-8b6c-2e83132655a9","Type":"ContainerStarted","Data":"fe112a6cc4c50b51cd660638c4c0b8df89393f9072ed1e4696024d020708d35b"} Jan 30 08:55:09 crc kubenswrapper[4758]: I0130 08:55:09.673278 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:55:09 crc kubenswrapper[4758]: I0130 08:55:09.673392 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-75f5775999-fhl5h" event={"ID":"c2358e5c-db98-4b7b-8b6c-2e83132655a9","Type":"ContainerStarted","Data":"44dbc54e8ebc6a1c282b44819599e749c1733e58e7d017dd2ffdf2b5e9bc3c68"} Jan 30 08:55:09 crc kubenswrapper[4758]: I0130 08:55:09.673467 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:55:09 crc kubenswrapper[4758]: I0130 08:55:09.673543 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-75f5775999-fhl5h" event={"ID":"c2358e5c-db98-4b7b-8b6c-2e83132655a9","Type":"ContainerStarted","Data":"94c3d691a550fe60449faf4eb5a03c55c0201e65c22fa6cce45f06b892ee2d42"} Jan 30 08:55:09 crc kubenswrapper[4758]: I0130 08:55:09.699571 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-75f5775999-fhl5h" podStartSLOduration=252.699555245 podStartE2EDuration="4m12.699555245s" podCreationTimestamp="2026-01-30 08:50:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:55:09.698986697 +0000 UTC m=+1514.671298248" watchObservedRunningTime="2026-01-30 08:55:09.699555245 
+0000 UTC m=+1514.671866796" Jan 30 08:55:18 crc kubenswrapper[4758]: I0130 08:55:18.211150 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:55:18 crc kubenswrapper[4758]: I0130 08:55:18.211746 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-75f5775999-fhl5h" Jan 30 08:55:22 crc kubenswrapper[4758]: I0130 08:55:22.387746 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 08:55:22 crc kubenswrapper[4758]: I0130 08:55:22.388301 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 08:55:22 crc kubenswrapper[4758]: I0130 08:55:22.388358 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 08:55:22 crc kubenswrapper[4758]: I0130 08:55:22.389114 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 08:55:22 crc kubenswrapper[4758]: I0130 08:55:22.389162 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" gracePeriod=600 Jan 30 08:55:22 crc kubenswrapper[4758]: E0130 08:55:22.532294 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:55:22 crc kubenswrapper[4758]: I0130 08:55:22.776056 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" exitCode=0 Jan 30 08:55:22 crc kubenswrapper[4758]: I0130 08:55:22.776066 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275"} Jan 30 08:55:22 crc kubenswrapper[4758]: I0130 08:55:22.776499 4758 scope.go:117] "RemoveContainer" containerID="b01f58947342357281d9de34ab8ce6d30a071f097fc6b75d76a38795a72373c2" Jan 30 08:55:22 crc kubenswrapper[4758]: I0130 08:55:22.777053 4758 scope.go:117] "RemoveContainer" 
containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:55:22 crc kubenswrapper[4758]: E0130 08:55:22.777367 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:55:37 crc kubenswrapper[4758]: I0130 08:55:37.768980 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:55:37 crc kubenswrapper[4758]: E0130 08:55:37.770031 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.373437 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d5644"] Jan 30 08:55:43 crc kubenswrapper[4758]: E0130 08:55:43.375254 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a36303a-ea35-4a12-be39-906481ea247a" containerName="swift-ring-rebalance" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.375351 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a36303a-ea35-4a12-be39-906481ea247a" containerName="swift-ring-rebalance" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.375585 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a36303a-ea35-4a12-be39-906481ea247a" containerName="swift-ring-rebalance" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.377959 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.399246 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5644"] Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.440183 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-utilities\") pod \"community-operators-d5644\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.440249 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-catalog-content\") pod \"community-operators-d5644\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.440735 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrmvq\" (UniqueName: \"kubernetes.io/projected/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-kube-api-access-vrmvq\") pod \"community-operators-d5644\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.542951 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrmvq\" (UniqueName: \"kubernetes.io/projected/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-kube-api-access-vrmvq\") pod \"community-operators-d5644\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.543478 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-utilities\") pod \"community-operators-d5644\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.543505 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-catalog-content\") pod \"community-operators-d5644\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.544198 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-utilities\") pod \"community-operators-d5644\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.544235 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-catalog-content\") pod \"community-operators-d5644\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.572269 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vrmvq\" (UniqueName: \"kubernetes.io/projected/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-kube-api-access-vrmvq\") pod \"community-operators-d5644\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:43 crc kubenswrapper[4758]: I0130 08:55:43.704614 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:44 crc kubenswrapper[4758]: I0130 08:55:44.347776 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5644"] Jan 30 08:55:44 crc kubenswrapper[4758]: I0130 08:55:44.978817 4758 generic.go:334] "Generic (PLEG): container finished" podID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerID="932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4" exitCode=0 Jan 30 08:55:44 crc kubenswrapper[4758]: I0130 08:55:44.978933 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5644" event={"ID":"9d4df89a-d459-4dc8-a19c-3839e9c47a8a","Type":"ContainerDied","Data":"932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4"} Jan 30 08:55:44 crc kubenswrapper[4758]: I0130 08:55:44.979357 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5644" event={"ID":"9d4df89a-d459-4dc8-a19c-3839e9c47a8a","Type":"ContainerStarted","Data":"e5d2968e6be1c8730a9024950dbf5f3e8b53eba5873867bdf9a0d6e741cb3717"} Jan 30 08:55:46 crc kubenswrapper[4758]: I0130 08:55:46.999197 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5644" event={"ID":"9d4df89a-d459-4dc8-a19c-3839e9c47a8a","Type":"ContainerStarted","Data":"faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6"} Jan 30 08:55:49 crc kubenswrapper[4758]: I0130 08:55:49.019315 4758 generic.go:334] "Generic (PLEG): container finished" podID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerID="faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6" exitCode=0 Jan 30 08:55:49 crc kubenswrapper[4758]: I0130 08:55:49.019376 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5644" event={"ID":"9d4df89a-d459-4dc8-a19c-3839e9c47a8a","Type":"ContainerDied","Data":"faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6"} Jan 30 08:55:50 crc kubenswrapper[4758]: I0130 08:55:50.034400 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5644" event={"ID":"9d4df89a-d459-4dc8-a19c-3839e9c47a8a","Type":"ContainerStarted","Data":"cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13"} Jan 30 08:55:50 crc kubenswrapper[4758]: I0130 08:55:50.068790 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d5644" podStartSLOduration=2.267292336 podStartE2EDuration="7.0687504s" podCreationTimestamp="2026-01-30 08:55:43 +0000 UTC" firstStartedPulling="2026-01-30 08:55:44.982789834 +0000 UTC m=+1549.955101385" lastFinishedPulling="2026-01-30 08:55:49.784247898 +0000 UTC m=+1554.756559449" observedRunningTime="2026-01-30 08:55:50.057081274 +0000 UTC m=+1555.029392835" watchObservedRunningTime="2026-01-30 08:55:50.0687504 +0000 UTC m=+1555.041061961" Jan 30 08:55:52 crc kubenswrapper[4758]: I0130 08:55:52.768576 4758 scope.go:117] "RemoveContainer" 
containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:55:52 crc kubenswrapper[4758]: E0130 08:55:52.769609 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:55:53 crc kubenswrapper[4758]: I0130 08:55:53.705394 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:53 crc kubenswrapper[4758]: I0130 08:55:53.705823 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d5644" Jan 30 08:55:54 crc kubenswrapper[4758]: I0130 08:55:54.756793 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-d5644" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerName="registry-server" probeResult="failure" output=< Jan 30 08:55:54 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 08:55:54 crc kubenswrapper[4758]: > Jan 30 08:56:03 crc kubenswrapper[4758]: I0130 08:56:03.751159 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d5644" Jan 30 08:56:03 crc kubenswrapper[4758]: I0130 08:56:03.769930 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:56:03 crc kubenswrapper[4758]: E0130 08:56:03.770583 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:56:03 crc kubenswrapper[4758]: I0130 08:56:03.808655 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d5644" Jan 30 08:56:03 crc kubenswrapper[4758]: I0130 08:56:03.993909 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d5644"] Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.180364 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d5644" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerName="registry-server" containerID="cri-o://cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13" gracePeriod=2 Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.679980 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d5644" Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.711241 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-utilities\") pod \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.711391 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrmvq\" (UniqueName: \"kubernetes.io/projected/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-kube-api-access-vrmvq\") pod \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.711418 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-catalog-content\") pod \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\" (UID: \"9d4df89a-d459-4dc8-a19c-3839e9c47a8a\") " Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.712317 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-utilities" (OuterVolumeSpecName: "utilities") pod "9d4df89a-d459-4dc8-a19c-3839e9c47a8a" (UID: "9d4df89a-d459-4dc8-a19c-3839e9c47a8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.722976 4758 scope.go:117] "RemoveContainer" containerID="2c3a333cae2d6b2084a8ea4de5cb47b1fe487040458de9660811e92c533cb616" Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.723454 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-kube-api-access-vrmvq" (OuterVolumeSpecName: "kube-api-access-vrmvq") pod "9d4df89a-d459-4dc8-a19c-3839e9c47a8a" (UID: "9d4df89a-d459-4dc8-a19c-3839e9c47a8a"). InnerVolumeSpecName "kube-api-access-vrmvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.770422 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d4df89a-d459-4dc8-a19c-3839e9c47a8a" (UID: "9d4df89a-d459-4dc8-a19c-3839e9c47a8a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.799398 4758 scope.go:117] "RemoveContainer" containerID="133e57354834ba4048e5d9ae39382e69423ed21832e9edcfe49d508cca9e97e3" Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.816167 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrmvq\" (UniqueName: \"kubernetes.io/projected/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-kube-api-access-vrmvq\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.816421 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.816524 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d4df89a-d459-4dc8-a19c-3839e9c47a8a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:05 crc kubenswrapper[4758]: I0130 08:56:05.837490 4758 scope.go:117] "RemoveContainer" containerID="c5be373def2d5d6ba0348da3d6e663da2b77ce86d4b3d39d71e5ba9a890af4be" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.190695 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5644" event={"ID":"9d4df89a-d459-4dc8-a19c-3839e9c47a8a","Type":"ContainerDied","Data":"cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13"} Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.191029 4758 scope.go:117] "RemoveContainer" containerID="cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.190584 4758 generic.go:334] "Generic (PLEG): container finished" podID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerID="cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13" exitCode=0 Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.191225 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5644" event={"ID":"9d4df89a-d459-4dc8-a19c-3839e9c47a8a","Type":"ContainerDied","Data":"e5d2968e6be1c8730a9024950dbf5f3e8b53eba5873867bdf9a0d6e741cb3717"} Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.190716 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d5644" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.213700 4758 scope.go:117] "RemoveContainer" containerID="faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.221194 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d5644"] Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.232214 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d5644"] Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.243529 4758 scope.go:117] "RemoveContainer" containerID="932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.261904 4758 scope.go:117] "RemoveContainer" containerID="cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13" Jan 30 08:56:06 crc kubenswrapper[4758]: E0130 08:56:06.262683 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13\": container with ID starting with cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13 not found: ID does not exist" containerID="cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.262793 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13"} err="failed to get container status \"cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13\": rpc error: code = NotFound desc = could not find container \"cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13\": container with ID starting with cffc3ea0e20f0b9e76ff6cbbf64465dc80b656936f4408b705d761269200ce13 not found: ID does not exist" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.262880 4758 scope.go:117] "RemoveContainer" containerID="faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6" Jan 30 08:56:06 crc kubenswrapper[4758]: E0130 08:56:06.263315 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6\": container with ID starting with faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6 not found: ID does not exist" containerID="faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.263404 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6"} err="failed to get container status \"faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6\": rpc error: code = NotFound desc = could not find container \"faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6\": container with ID starting with faab22cdd8aea0a5f50d0c456840076bc92d6f69b765625f21a11aa0a01b51c6 not found: ID does not exist" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.263478 4758 scope.go:117] "RemoveContainer" containerID="932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4" Jan 30 08:56:06 crc kubenswrapper[4758]: E0130 08:56:06.263815 4758 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4\": container with ID starting with 932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4 not found: ID does not exist" containerID="932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4" Jan 30 08:56:06 crc kubenswrapper[4758]: I0130 08:56:06.263857 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4"} err="failed to get container status \"932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4\": rpc error: code = NotFound desc = could not find container \"932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4\": container with ID starting with 932126f8db1201c61a5ddf9f64d963e862862a0febf9850362e677ce7aa313b4 not found: ID does not exist" Jan 30 08:56:07 crc kubenswrapper[4758]: I0130 08:56:07.778782 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" path="/var/lib/kubelet/pods/9d4df89a-d459-4dc8-a19c-3839e9c47a8a/volumes" Jan 30 08:56:18 crc kubenswrapper[4758]: I0130 08:56:18.769350 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:56:18 crc kubenswrapper[4758]: E0130 08:56:18.770687 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.867641 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4k5bg"] Jan 30 08:56:22 crc kubenswrapper[4758]: E0130 08:56:22.868668 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerName="registry-server" Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.868686 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerName="registry-server" Jan 30 08:56:22 crc kubenswrapper[4758]: E0130 08:56:22.868698 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerName="extract-utilities" Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.868706 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerName="extract-utilities" Jan 30 08:56:22 crc kubenswrapper[4758]: E0130 08:56:22.868741 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerName="extract-content" Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.868750 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerName="extract-content" Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.869013 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d4df89a-d459-4dc8-a19c-3839e9c47a8a" containerName="registry-server" Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.870715 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.889792 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4k5bg"] Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.958732 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-utilities\") pod \"certified-operators-4k5bg\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.958820 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-catalog-content\") pod \"certified-operators-4k5bg\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:22 crc kubenswrapper[4758]: I0130 08:56:22.958901 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwnsh\" (UniqueName: \"kubernetes.io/projected/7f689102-018e-4401-9a71-d2066434a1b1-kube-api-access-kwnsh\") pod \"certified-operators-4k5bg\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:23 crc kubenswrapper[4758]: I0130 08:56:23.060909 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwnsh\" (UniqueName: \"kubernetes.io/projected/7f689102-018e-4401-9a71-d2066434a1b1-kube-api-access-kwnsh\") pod \"certified-operators-4k5bg\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:23 crc kubenswrapper[4758]: I0130 08:56:23.061078 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-utilities\") pod \"certified-operators-4k5bg\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:23 crc kubenswrapper[4758]: I0130 08:56:23.061165 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-catalog-content\") pod \"certified-operators-4k5bg\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:23 crc kubenswrapper[4758]: I0130 08:56:23.061725 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-catalog-content\") pod \"certified-operators-4k5bg\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:23 crc kubenswrapper[4758]: I0130 08:56:23.061837 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-utilities\") pod \"certified-operators-4k5bg\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:23 crc kubenswrapper[4758]: I0130 08:56:23.088994 4758 operation_generator.go:637] 
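The volume lifecycle for certified-operators-4k5bg above follows the kubelet's usual three-step sequence per volume: "VerifyControllerAttachedVolume started" (reconciler_common.go:245), then "MountVolume started" (reconciler_common.go:218), then "MountVolume.SetUp succeeded" (operation_generator.go:637). A sketch for measuring per-volume mount latency from these entries, assuming one entry per line in a hypothetical kubelet.log:

    import re
    from datetime import datetime

    # Time from "MountVolume started" to "MountVolume.SetUp succeeded" per
    # volume name. klog timestamps look like "I0130 08:56:23.061078".
    ts_re = re.compile(r'[IE](\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})')
    vol_re = re.compile(r'for volume \\?"([\w.-]+)\\?"')
    started = {}

    with open("kubelet.log") as f:          # hypothetical file name
        for line in f:
            ts, vol = ts_re.search(line), vol_re.search(line)
            if not (ts and vol):
                continue
            t = datetime.strptime(ts.group(1) + " " + ts.group(2), "%m%d %H:%M:%S.%f")
            name = vol.group(1)
            if "operationExecutor.MountVolume started" in line:
                started[name] = t
            elif "MountVolume.SetUp succeeded" in line and name in started:
                print(f"{name}: {(t - started.pop(name)).total_seconds():.3f}s")

On the entries above this would report sub-millisecond setups for the two emptyDir volumes and roughly 28ms for the projected service-account token, which needs an API round trip.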
"MountVolume.SetUp succeeded for volume \"kube-api-access-kwnsh\" (UniqueName: \"kubernetes.io/projected/7f689102-018e-4401-9a71-d2066434a1b1-kube-api-access-kwnsh\") pod \"certified-operators-4k5bg\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") " pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:23 crc kubenswrapper[4758]: I0130 08:56:23.197355 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:23 crc kubenswrapper[4758]: I0130 08:56:23.818158 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4k5bg"] Jan 30 08:56:24 crc kubenswrapper[4758]: I0130 08:56:24.390075 4758 generic.go:334] "Generic (PLEG): container finished" podID="7f689102-018e-4401-9a71-d2066434a1b1" containerID="d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed" exitCode=0 Jan 30 08:56:24 crc kubenswrapper[4758]: I0130 08:56:24.390105 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5bg" event={"ID":"7f689102-018e-4401-9a71-d2066434a1b1","Type":"ContainerDied","Data":"d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed"} Jan 30 08:56:24 crc kubenswrapper[4758]: I0130 08:56:24.390141 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5bg" event={"ID":"7f689102-018e-4401-9a71-d2066434a1b1","Type":"ContainerStarted","Data":"d3680b93a3f93bc0676cf4233306ad55bc57d711b4f34aed9556d725dac61a80"} Jan 30 08:56:26 crc kubenswrapper[4758]: I0130 08:56:26.407252 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5bg" event={"ID":"7f689102-018e-4401-9a71-d2066434a1b1","Type":"ContainerStarted","Data":"12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723"} Jan 30 08:56:27 crc kubenswrapper[4758]: E0130 08:56:27.246017 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="f978baf9-b7c0-4d25-8bca-e95a018ba2af" Jan 30 08:56:27 crc kubenswrapper[4758]: I0130 08:56:27.418292 4758 generic.go:334] "Generic (PLEG): container finished" podID="7f689102-018e-4401-9a71-d2066434a1b1" containerID="12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723" exitCode=0 Jan 30 08:56:27 crc kubenswrapper[4758]: I0130 08:56:27.418997 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 08:56:27 crc kubenswrapper[4758]: I0130 08:56:27.420636 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5bg" event={"ID":"7f689102-018e-4401-9a71-d2066434a1b1","Type":"ContainerDied","Data":"12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723"} Jan 30 08:56:28 crc kubenswrapper[4758]: I0130 08:56:28.177522 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:56:28 crc kubenswrapper[4758]: I0130 08:56:28.195597 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f978baf9-b7c0-4d25-8bca-e95a018ba2af-etc-swift\") pod \"swift-storage-0\" (UID: \"f978baf9-b7c0-4d25-8bca-e95a018ba2af\") " pod="openstack/swift-storage-0" Jan 30 08:56:28 crc kubenswrapper[4758]: I0130 08:56:28.321242 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 08:56:28 crc kubenswrapper[4758]: I0130 08:56:28.455133 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5bg" event={"ID":"7f689102-018e-4401-9a71-d2066434a1b1","Type":"ContainerStarted","Data":"f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4"} Jan 30 08:56:28 crc kubenswrapper[4758]: I0130 08:56:28.514475 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4k5bg" podStartSLOduration=3.014682733 podStartE2EDuration="6.514450781s" podCreationTimestamp="2026-01-30 08:56:22 +0000 UTC" firstStartedPulling="2026-01-30 08:56:24.391906835 +0000 UTC m=+1589.364218386" lastFinishedPulling="2026-01-30 08:56:27.891674883 +0000 UTC m=+1592.863986434" observedRunningTime="2026-01-30 08:56:28.501781434 +0000 UTC m=+1593.474092985" watchObservedRunningTime="2026-01-30 08:56:28.514450781 +0000 UTC m=+1593.486762342" Jan 30 08:56:29 crc kubenswrapper[4758]: I0130 08:56:29.773984 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:56:29 crc kubenswrapper[4758]: E0130 08:56:29.774742 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:56:29 crc kubenswrapper[4758]: I0130 08:56:29.879498 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 08:56:30 crc kubenswrapper[4758]: I0130 08:56:30.478085 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"8703f1e9e8928434c98d4c6305d5a4ec1a5b75cf736b3eed9891cf1dbbf64aee"} Jan 30 08:56:31 crc kubenswrapper[4758]: I0130 08:56:31.490874 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
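The "Observed pod startup duration" entry above is internally consistent: podStartSLOduration is the end-to-end startup time with the image-pull window subtracted, which matches the upstream pod_startup_latency_tracker definition of excluding pull time from the SLI. A quick check of the arithmetic with the logged certified-operators-4k5bg timestamps (truncated to microseconds, since that is what strptime's %f accepts):

    from datetime import datetime

    fmt = "%H:%M:%S.%f"
    def secs(a, b):
        return (datetime.strptime(b, fmt) - datetime.strptime(a, fmt)).total_seconds()

    # firstStartedPulling -> lastFinishedPulling: the image-pull window
    pull = secs("08:56:24.391906", "08:56:27.891674")   # ~3.499768s
    # podCreationTimestamp -> watchObservedRunningTime: podStartE2EDuration
    e2e = secs("08:56:22.000000", "08:56:28.514450")    # ~6.514450s

    print(round(e2e - pull, 6))  # ~3.014682, matching podStartSLOduration=3.014682733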
event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"5ee4528374b12e4014e8b0ebffc8c445fd01cea5224d400e8f5fd76b2d763154"} Jan 30 08:56:32 crc kubenswrapper[4758]: I0130 08:56:32.510793 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"8e4139287ca583faa197b823336a3caf472fda910fb49db2b67f76707b1ab68c"} Jan 30 08:56:32 crc kubenswrapper[4758]: I0130 08:56:32.511235 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"db0950963ea6ed2a744ce1e483a1cb1cf3e0a85d7e27f7d9f80ee8c1422595bd"} Jan 30 08:56:32 crc kubenswrapper[4758]: I0130 08:56:32.511246 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"bf91cdfae8594acd2118eb0120f7c7640e92562bfbec39985d4919b429f6f40f"} Jan 30 08:56:33 crc kubenswrapper[4758]: I0130 08:56:33.198391 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:33 crc kubenswrapper[4758]: I0130 08:56:33.198874 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:33 crc kubenswrapper[4758]: I0130 08:56:33.577242 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"30f6ee3f2270007395ace89a2d9dd21e6d303244a85417e984e7d3f7e7b39f47"} Jan 30 08:56:34 crc kubenswrapper[4758]: I0130 08:56:34.326364 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4k5bg" podUID="7f689102-018e-4401-9a71-d2066434a1b1" containerName="registry-server" probeResult="failure" output=< Jan 30 08:56:34 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 08:56:34 crc kubenswrapper[4758]: > Jan 30 08:56:34 crc kubenswrapper[4758]: I0130 08:56:34.592029 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"30ce0ad959367cccac320ed766b5baed08467ed4760da1418aed64f0411fc753"} Jan 30 08:56:34 crc kubenswrapper[4758]: I0130 08:56:34.592156 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"fb55e9300448c5a44afa0c1fd1f333132d6c9589c02c72240d2cdd6fb37b7254"} Jan 30 08:56:34 crc kubenswrapper[4758]: I0130 08:56:34.592169 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"fae9c323ae3b7f5a016d334b00edb0a9deb2b042da2a81d063615e2004662bf1"} Jan 30 08:56:36 crc kubenswrapper[4758]: I0130 08:56:36.619608 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"5d40926bc99838128d993cee222a9d9bc5a4e793f7fa6302bf1d9d4c67af178f"} Jan 30 08:56:36 crc kubenswrapper[4758]: I0130 08:56:36.620307 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"4888d37ae86ea83d2b54db692d59e01b14e365ebabf18a9dda7b1778184c04e5"} Jan 30 08:56:36 crc kubenswrapper[4758]: I0130 08:56:36.620323 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"4efeb2a6e7665ad7862a983eb539aa1aa71f10510c7e5ab5517537601fed784d"} Jan 30 08:56:37 crc kubenswrapper[4758]: I0130 08:56:37.681566 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"e6673ae2446935eefe6357be07bc24bba88a18cef45ece23731861c73ca91739"} Jan 30 08:56:37 crc kubenswrapper[4758]: I0130 08:56:37.681928 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"6111c543313b1bac1b03d6d1e86886bb801aeb349fc5e27dff1a34fa381b3628"} Jan 30 08:56:37 crc kubenswrapper[4758]: I0130 08:56:37.681948 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"390b1861bb92e08bbc3f8bb823cd442b17f5ad4a0df2559d89540b8c341bbdeb"} Jan 30 08:56:38 crc kubenswrapper[4758]: I0130 08:56:38.699126 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f978baf9-b7c0-4d25-8bca-e95a018ba2af","Type":"ContainerStarted","Data":"f512255a0c0c8def3a189698c8082c92e06dfea7350e8de8021ff589db4a383d"} Jan 30 08:56:38 crc kubenswrapper[4758]: I0130 08:56:38.755064 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=500.723413615 podStartE2EDuration="8m26.755027332s" podCreationTimestamp="2026-01-30 08:48:12 +0000 UTC" firstStartedPulling="2026-01-30 08:56:29.894867312 +0000 UTC m=+1594.867178873" lastFinishedPulling="2026-01-30 08:56:35.926481039 +0000 UTC m=+1600.898792590" observedRunningTime="2026-01-30 08:56:38.746573817 +0000 UTC m=+1603.718885398" watchObservedRunningTime="2026-01-30 08:56:38.755027332 +0000 UTC m=+1603.727338883" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.109712 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c8fb88b59-pmngp"] Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.112447 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.116241 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.125957 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c8fb88b59-pmngp"] Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.245141 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-swift-storage-0\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.245230 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-nb\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.245490 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-svc\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.245562 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-sb\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.245948 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-config\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.246223 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7xsj\" (UniqueName: \"kubernetes.io/projected/846a15fd-f202-4eaf-b346-b66916daa7d1-kube-api-access-r7xsj\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.348270 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-swift-storage-0\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.348341 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-nb\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: 
\"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.348396 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-svc\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.348459 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-sb\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.349786 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-svc\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.349800 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-swift-storage-0\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.349860 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-config\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.349806 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-nb\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.349986 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7xsj\" (UniqueName: \"kubernetes.io/projected/846a15fd-f202-4eaf-b346-b66916daa7d1-kube-api-access-r7xsj\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.350146 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-sb\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.350514 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-config\") pod \"dnsmasq-dns-5c8fb88b59-pmngp\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:39 crc kubenswrapper[4758]: 
Jan 30 08:56:39 crc kubenswrapper[4758]: I0130 08:56:39.434249 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp"
Jan 30 08:56:40 crc kubenswrapper[4758]: I0130 08:56:40.080565 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c8fb88b59-pmngp"]
Jan 30 08:56:40 crc kubenswrapper[4758]: I0130 08:56:40.723706 4758 generic.go:334] "Generic (PLEG): container finished" podID="846a15fd-f202-4eaf-b346-b66916daa7d1" containerID="a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb" exitCode=0
Jan 30 08:56:40 crc kubenswrapper[4758]: I0130 08:56:40.723806 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" event={"ID":"846a15fd-f202-4eaf-b346-b66916daa7d1","Type":"ContainerDied","Data":"a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb"}
Jan 30 08:56:40 crc kubenswrapper[4758]: I0130 08:56:40.724071 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" event={"ID":"846a15fd-f202-4eaf-b346-b66916daa7d1","Type":"ContainerStarted","Data":"a0aeac30b993bcb0e0326076acc7ca6b11ca6918894e3364821c28d868e8c055"}
Jan 30 08:56:41 crc kubenswrapper[4758]: I0130 08:56:41.736208 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" event={"ID":"846a15fd-f202-4eaf-b346-b66916daa7d1","Type":"ContainerStarted","Data":"38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0"}
Jan 30 08:56:41 crc kubenswrapper[4758]: I0130 08:56:41.736588 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp"
Jan 30 08:56:41 crc kubenswrapper[4758]: I0130 08:56:41.762602 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" podStartSLOduration=2.762582173 podStartE2EDuration="2.762582173s" podCreationTimestamp="2026-01-30 08:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:56:41.75548216 +0000 UTC m=+1606.727793731" watchObservedRunningTime="2026-01-30 08:56:41.762582173 +0000 UTC m=+1606.734893724"
Jan 30 08:56:43 crc kubenswrapper[4758]: I0130 08:56:43.245178 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4k5bg"
Jan 30 08:56:43 crc kubenswrapper[4758]: I0130 08:56:43.297949 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4k5bg"
Jan 30 08:56:43 crc kubenswrapper[4758]: I0130 08:56:43.484193 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4k5bg"]
Jan 30 08:56:44 crc kubenswrapper[4758]: I0130 08:56:44.762269 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4k5bg" podUID="7f689102-018e-4401-9a71-d2066434a1b1" containerName="registry-server" containerID="cri-o://f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4" gracePeriod=2
Jan 30 08:56:44 crc kubenswrapper[4758]: I0130 08:56:44.769074 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275"
Jan 30 08:56:44 crc kubenswrapper[4758]: E0130 08:56:44.769505 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.233988 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4k5bg"
Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.288104 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-utilities\") pod \"7f689102-018e-4401-9a71-d2066434a1b1\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") "
Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.288296 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-catalog-content\") pod \"7f689102-018e-4401-9a71-d2066434a1b1\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") "
Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.288388 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwnsh\" (UniqueName: \"kubernetes.io/projected/7f689102-018e-4401-9a71-d2066434a1b1-kube-api-access-kwnsh\") pod \"7f689102-018e-4401-9a71-d2066434a1b1\" (UID: \"7f689102-018e-4401-9a71-d2066434a1b1\") "
Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.288802 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-utilities" (OuterVolumeSpecName: "utilities") pod "7f689102-018e-4401-9a71-d2066434a1b1" (UID: "7f689102-018e-4401-9a71-d2066434a1b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.295358 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f689102-018e-4401-9a71-d2066434a1b1-kube-api-access-kwnsh" (OuterVolumeSpecName: "kube-api-access-kwnsh") pod "7f689102-018e-4401-9a71-d2066434a1b1" (UID: "7f689102-018e-4401-9a71-d2066434a1b1"). InnerVolumeSpecName "kube-api-access-kwnsh". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.391170 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.391218 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwnsh\" (UniqueName: \"kubernetes.io/projected/7f689102-018e-4401-9a71-d2066434a1b1-kube-api-access-kwnsh\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.399670 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f689102-018e-4401-9a71-d2066434a1b1" (UID: "7f689102-018e-4401-9a71-d2066434a1b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.496426 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f689102-018e-4401-9a71-d2066434a1b1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.775679 4758 generic.go:334] "Generic (PLEG): container finished" podID="7f689102-018e-4401-9a71-d2066434a1b1" containerID="f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4" exitCode=0 Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.775775 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4k5bg" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.798371 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5bg" event={"ID":"7f689102-018e-4401-9a71-d2066434a1b1","Type":"ContainerDied","Data":"f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4"} Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.798431 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5bg" event={"ID":"7f689102-018e-4401-9a71-d2066434a1b1","Type":"ContainerDied","Data":"d3680b93a3f93bc0676cf4233306ad55bc57d711b4f34aed9556d725dac61a80"} Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.798456 4758 scope.go:117] "RemoveContainer" containerID="f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.843123 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4k5bg"] Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.850704 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4k5bg"] Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.863919 4758 scope.go:117] "RemoveContainer" containerID="12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.916475 4758 scope.go:117] "RemoveContainer" containerID="d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.963175 4758 scope.go:117] "RemoveContainer" containerID="f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4" Jan 30 08:56:45 crc kubenswrapper[4758]: E0130 08:56:45.963762 4758 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4\": container with ID starting with f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4 not found: ID does not exist" containerID="f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.963801 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4"} err="failed to get container status \"f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4\": rpc error: code = NotFound desc = could not find container \"f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4\": container with ID starting with f1d82afb0cbb3f638ce31ffd5d981913b4b07bbd4baf701d40bf74e4e41849d4 not found: ID does not exist" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.963829 4758 scope.go:117] "RemoveContainer" containerID="12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723" Jan 30 08:56:45 crc kubenswrapper[4758]: E0130 08:56:45.964281 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723\": container with ID starting with 12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723 not found: ID does not exist" containerID="12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.964300 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723"} err="failed to get container status \"12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723\": rpc error: code = NotFound desc = could not find container \"12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723\": container with ID starting with 12599d104732af671b16034360d5d77616c0a5e7d0c899a1781ef2d0487a9723 not found: ID does not exist" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.964312 4758 scope.go:117] "RemoveContainer" containerID="d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed" Jan 30 08:56:45 crc kubenswrapper[4758]: E0130 08:56:45.964593 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed\": container with ID starting with d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed not found: ID does not exist" containerID="d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed" Jan 30 08:56:45 crc kubenswrapper[4758]: I0130 08:56:45.964617 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed"} err="failed to get container status \"d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed\": rpc error: code = NotFound desc = could not find container \"d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed\": container with ID starting with d24410fc872499de31d8c3f495bf07c2d342507b8e72213b169bad7e8a6a98ed not found: ID does not exist" Jan 30 08:56:47 crc kubenswrapper[4758]: I0130 08:56:47.778655 4758 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="7f689102-018e-4401-9a71-d2066434a1b1" path="/var/lib/kubelet/pods/7f689102-018e-4401-9a71-d2066434a1b1/volumes" Jan 30 08:56:49 crc kubenswrapper[4758]: I0130 08:56:49.435250 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:56:49 crc kubenswrapper[4758]: I0130 08:56:49.531523 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5459cb87c-dlx4d"] Jan 30 08:56:49 crc kubenswrapper[4758]: I0130 08:56:49.531940 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" podUID="cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" containerName="dnsmasq-dns" containerID="cri-o://6e12455a439f0f64e5942ea1c92f66b043c960a09d5ba6c660cf45634c75cc34" gracePeriod=10 Jan 30 08:56:49 crc kubenswrapper[4758]: I0130 08:56:49.833676 4758 generic.go:334] "Generic (PLEG): container finished" podID="cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" containerID="6e12455a439f0f64e5942ea1c92f66b043c960a09d5ba6c660cf45634c75cc34" exitCode=0 Jan 30 08:56:49 crc kubenswrapper[4758]: I0130 08:56:49.833730 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" event={"ID":"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe","Type":"ContainerDied","Data":"6e12455a439f0f64e5942ea1c92f66b043c960a09d5ba6c660cf45634c75cc34"} Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.105972 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.225880 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlpl6\" (UniqueName: \"kubernetes.io/projected/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-kube-api-access-jlpl6\") pod \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.225935 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-sb\") pod \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.225993 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-dns-svc\") pod \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.226097 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-nb\") pod \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.226173 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-config\") pod \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\" (UID: \"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe\") " Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.233229 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-kube-api-access-jlpl6" (OuterVolumeSpecName: "kube-api-access-jlpl6") pod "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" (UID: "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe"). InnerVolumeSpecName "kube-api-access-jlpl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.326448 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" (UID: "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.329363 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlpl6\" (UniqueName: \"kubernetes.io/projected/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-kube-api-access-jlpl6\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.329412 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.337544 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-config" (OuterVolumeSpecName: "config") pod "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" (UID: "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.347471 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" (UID: "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.366874 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" (UID: "cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.432310 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.432355 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.432369 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.845668 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" event={"ID":"cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe","Type":"ContainerDied","Data":"ee6038add08fd771bda22b40e423fff940fc9b5d99566b28b9e2c028c0c8fa85"} Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.846111 4758 scope.go:117] "RemoveContainer" containerID="6e12455a439f0f64e5942ea1c92f66b043c960a09d5ba6c660cf45634c75cc34" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.845732 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5459cb87c-dlx4d" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.868474 4758 scope.go:117] "RemoveContainer" containerID="c6c147de1a4a6872b323ab195d60eaec8350ddff6728805d455fbf3cd9db1508" Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.890449 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5459cb87c-dlx4d"] Jan 30 08:56:50 crc kubenswrapper[4758]: I0130 08:56:50.907854 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5459cb87c-dlx4d"] Jan 30 08:56:51 crc kubenswrapper[4758]: I0130 08:56:51.778780 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" path="/var/lib/kubelet/pods/cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe/volumes" Jan 30 08:56:55 crc kubenswrapper[4758]: I0130 08:56:55.776103 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:56:55 crc kubenswrapper[4758]: E0130 08:56:55.778856 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:56:57 crc kubenswrapper[4758]: I0130 08:56:57.786806 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:56:59 crc kubenswrapper[4758]: I0130 08:56:59.741212 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:57:03 crc kubenswrapper[4758]: I0130 08:57:03.448863 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" containerName="rabbitmq" 
containerID="cri-o://dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba" gracePeriod=604795 Jan 30 08:57:04 crc kubenswrapper[4758]: I0130 08:57:04.463587 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" containerName="rabbitmq" containerID="cri-o://cb775ec2ee99ffed411c83db7c8c8f39801fd5654096db2c76a7443041b48ca9" gracePeriod=604796 Jan 30 08:57:05 crc kubenswrapper[4758]: I0130 08:57:05.903835 4758 scope.go:117] "RemoveContainer" containerID="1929e27981ebbdc0f15097ba4c6e8187f43c6e486ed49a85e5cf2d09718e26d7" Jan 30 08:57:05 crc kubenswrapper[4758]: I0130 08:57:05.935929 4758 scope.go:117] "RemoveContainer" containerID="056805156e46cbccce9c026e2d720a807e1f2530c8afc8d6afbacc4cb099539b" Jan 30 08:57:08 crc kubenswrapper[4758]: I0130 08:57:08.103158 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 30 08:57:08 crc kubenswrapper[4758]: I0130 08:57:08.256295 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.000206 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.020984 4758 generic.go:334] "Generic (PLEG): container finished" podID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" containerID="dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba" exitCode=0 Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.021055 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89ff2fc5-609f-4ca7-b997-9f8adfa5a221","Type":"ContainerDied","Data":"dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba"} Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.021092 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"89ff2fc5-609f-4ca7-b997-9f8adfa5a221","Type":"ContainerDied","Data":"8f4809dc1b13b9e08c7692a160b88862bacf3c8f0bf775e5a671d7ecfeb7b0f7"} Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.021120 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.021133 4758 scope.go:117] "RemoveContainer" containerID="dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.043966 4758 scope.go:117] "RemoveContainer" containerID="80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.083146 4758 scope.go:117] "RemoveContainer" containerID="dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba" Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.094478 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba\": container with ID starting with dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba not found: ID does not exist" containerID="dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.094529 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba"} err="failed to get container status \"dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba\": rpc error: code = NotFound desc = could not find container \"dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba\": container with ID starting with dd13d076171a8cc00d1338f0130b3de1ce2a5f61bec80c46d94ebc312b54dbba not found: ID does not exist" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.094555 4758 scope.go:117] "RemoveContainer" containerID="80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293" Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.096104 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293\": container with ID starting with 80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293 not found: ID does not exist" containerID="80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.096182 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293"} err="failed to get container status \"80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293\": rpc error: code = NotFound desc = could not find container \"80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293\": container with ID starting with 80e96950626e504fe875bcc468a52846e11e70b49b180067173ad891f6436293 not found: ID does not exist" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.109896 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-tls\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.109960 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-erlang-cookie\") pod 
\"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.110019 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-plugins\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.110042 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-erlang-cookie-secret\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.110151 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-config-data\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.110176 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-confd\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.110198 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p297\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-kube-api-access-7p297\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.110276 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-plugins-conf\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.110300 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-server-conf\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.110374 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-pod-info\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.110390 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\" (UID: \"89ff2fc5-609f-4ca7-b997-9f8adfa5a221\") " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.112565 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: 
"89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.114604 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.141865 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.143197 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.150165 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-kube-api-access-7p297" (OuterVolumeSpecName: "kube-api-access-7p297") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "kube-api-access-7p297". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.154766 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.187601 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.189301 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-pod-info" (OuterVolumeSpecName: "pod-info") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.214505 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.214533 4758 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.214543 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p297\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-kube-api-access-7p297\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.214554 4758 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.214563 4758 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.214585 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.214594 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.214603 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.251257 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-server-conf" (OuterVolumeSpecName: "server-conf") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.255146 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-config-data" (OuterVolumeSpecName: "config-data") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.285260 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.292993 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "89ff2fc5-609f-4ca7-b997-9f8adfa5a221" (UID: "89ff2fc5-609f-4ca7-b997-9f8adfa5a221"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.318318 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.318356 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.318368 4758 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89ff2fc5-609f-4ca7-b997-9f8adfa5a221-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.318377 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.414190 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.437218 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.456275 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.456787 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" containerName="init" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.456815 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" containerName="init" Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.456829 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f689102-018e-4401-9a71-d2066434a1b1" containerName="extract-utilities" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.456838 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f689102-018e-4401-9a71-d2066434a1b1" containerName="extract-utilities" Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.456861 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f689102-018e-4401-9a71-d2066434a1b1" containerName="registry-server" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.456867 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f689102-018e-4401-9a71-d2066434a1b1" containerName="registry-server" Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.456887 4758 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" containerName="setup-container" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.456895 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" containerName="setup-container" Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.456903 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f689102-018e-4401-9a71-d2066434a1b1" containerName="extract-content" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.456909 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f689102-018e-4401-9a71-d2066434a1b1" containerName="extract-content" Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.456925 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" containerName="dnsmasq-dns" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.456931 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" containerName="dnsmasq-dns" Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.456943 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" containerName="rabbitmq" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.456948 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" containerName="rabbitmq" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.457153 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f689102-018e-4401-9a71-d2066434a1b1" containerName="registry-server" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.457173 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" containerName="rabbitmq" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.457185 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfeefcd5-bbc0-4036-b3b9-4a3485dd9ffe" containerName="dnsmasq-dns" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.458205 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.467718 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.467982 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-8zbw4" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.468225 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.468270 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.468496 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.468615 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.468632 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.493142 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.632809 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.632900 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c72311ae-5d7e-4978-a690-a9bee0b3672b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.632925 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.632944 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.632973 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.633147 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/c72311ae-5d7e-4978-a690-a9bee0b3672b-config-data\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.633216 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.633265 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpptg\" (UniqueName: \"kubernetes.io/projected/c72311ae-5d7e-4978-a690-a9bee0b3672b-kube-api-access-cpptg\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.633374 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c72311ae-5d7e-4978-a690-a9bee0b3672b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.633400 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c72311ae-5d7e-4978-a690-a9bee0b3672b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.633417 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c72311ae-5d7e-4978-a690-a9bee0b3672b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735522 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735579 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c72311ae-5d7e-4978-a690-a9bee0b3672b-config-data\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735619 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735689 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpptg\" (UniqueName: \"kubernetes.io/projected/c72311ae-5d7e-4978-a690-a9bee0b3672b-kube-api-access-cpptg\") pod \"rabbitmq-server-0\" (UID: 
\"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735743 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c72311ae-5d7e-4978-a690-a9bee0b3672b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735769 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c72311ae-5d7e-4978-a690-a9bee0b3672b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735792 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c72311ae-5d7e-4978-a690-a9bee0b3672b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735897 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735957 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c72311ae-5d7e-4978-a690-a9bee0b3672b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.735986 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.736018 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.736514 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.737005 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c72311ae-5d7e-4978-a690-a9bee0b3672b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.737008 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/c72311ae-5d7e-4978-a690-a9bee0b3672b-config-data\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.737379 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.737754 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.737760 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c72311ae-5d7e-4978-a690-a9bee0b3672b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.741758 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c72311ae-5d7e-4978-a690-a9bee0b3672b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.759103 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.759344 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c72311ae-5d7e-4978-a690-a9bee0b3672b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.763261 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c72311ae-5d7e-4978-a690-a9bee0b3672b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.766632 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpptg\" (UniqueName: \"kubernetes.io/projected/c72311ae-5d7e-4978-a690-a9bee0b3672b-kube-api-access-cpptg\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.770027 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:57:10 crc kubenswrapper[4758]: E0130 08:57:10.770395 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:57:10 crc kubenswrapper[4758]: I0130 08:57:10.830464 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c72311ae-5d7e-4978-a690-a9bee0b3672b\") " pod="openstack/rabbitmq-server-0" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.010882 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5974d6465c-97ckq"] Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.012902 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.017256 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.018634 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5974d6465c-97ckq"] Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.042752 4758 generic.go:334] "Generic (PLEG): container finished" podID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" containerID="cb775ec2ee99ffed411c83db7c8c8f39801fd5654096db2c76a7443041b48ca9" exitCode=0 Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.042789 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5","Type":"ContainerDied","Data":"cb775ec2ee99ffed411c83db7c8c8f39801fd5654096db2c76a7443041b48ca9"} Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.069039 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-nb\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.069093 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-swift-storage-0\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.071566 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-openstack-edpm-ipam\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.071619 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-svc\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.071840 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-config\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.071925 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-sb\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.071992 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-886xf\" (UniqueName: \"kubernetes.io/projected/f0bdec68-2725-4515-b6f0-dc421cec4a6d-kube-api-access-886xf\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.093312 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.179230 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-nb\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.180257 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-swift-storage-0\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.180623 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-openstack-edpm-ipam\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.180661 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-svc\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.180700 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-config\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.180740 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-sb\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: 
\"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.180767 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-886xf\" (UniqueName: \"kubernetes.io/projected/f0bdec68-2725-4515-b6f0-dc421cec4a6d-kube-api-access-886xf\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.180193 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-nb\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.181787 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-swift-storage-0\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.182353 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-openstack-edpm-ipam\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.182875 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-svc\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.189070 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-sb\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.194195 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-config\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.220006 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-886xf\" (UniqueName: \"kubernetes.io/projected/f0bdec68-2725-4515-b6f0-dc421cec4a6d-kube-api-access-886xf\") pod \"dnsmasq-dns-5974d6465c-97ckq\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.228497 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387010 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-confd\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387068 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-server-conf\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387088 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387127 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-plugins-conf\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387238 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-pod-info\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387329 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xslvz\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-kube-api-access-xslvz\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387362 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-tls\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387388 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-plugins\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387423 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-erlang-cookie-secret\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387446 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-config-data\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: 
\"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.387532 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-erlang-cookie\") pod \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\" (UID: \"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5\") " Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.390349 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.390637 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.396010 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.402082 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.402166 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.402621 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.409172 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.415232 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-kube-api-access-xslvz" (OuterVolumeSpecName: "kube-api-access-xslvz") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "kube-api-access-xslvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.417033 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-pod-info" (OuterVolumeSpecName: "pod-info") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.490392 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.491042 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.491158 4758 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.491210 4758 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.491270 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xslvz\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-kube-api-access-xslvz\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.491335 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.491385 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.491435 4758 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.541835 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-config-data" (OuterVolumeSpecName: "config-data") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.557698 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-server-conf" (OuterVolumeSpecName: "server-conf") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.593337 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.593389 4758 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.612345 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.696811 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.700311 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" (UID: "7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.792859 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89ff2fc5-609f-4ca7-b997-9f8adfa5a221" path="/var/lib/kubelet/pods/89ff2fc5-609f-4ca7-b997-9f8adfa5a221/volumes" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.800948 4758 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:11 crc kubenswrapper[4758]: I0130 08:57:11.845866 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.052966 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c72311ae-5d7e-4978-a690-a9bee0b3672b","Type":"ContainerStarted","Data":"c3e7defacd392862d3dfefe46607e7c14ff9fc9486f9a510a8c91c79b27f281e"} Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.055334 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5","Type":"ContainerDied","Data":"2885eef36f4c15f0a02d92953e9ac8c27214898712fb29c24b72fc2ee76d019d"} Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.055599 4758 scope.go:117] "RemoveContainer" containerID="cb775ec2ee99ffed411c83db7c8c8f39801fd5654096db2c76a7443041b48ca9" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.055592 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.069172 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5974d6465c-97ckq"] Jan 30 08:57:12 crc kubenswrapper[4758]: W0130 08:57:12.073532 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0bdec68_2725_4515_b6f0_dc421cec4a6d.slice/crio-875a1e7b2f28ea73f14e324df5b8439b48c8b55804c659cf3b46d58849035ac8 WatchSource:0}: Error finding container 875a1e7b2f28ea73f14e324df5b8439b48c8b55804c659cf3b46d58849035ac8: Status 404 returned error can't find the container with id 875a1e7b2f28ea73f14e324df5b8439b48c8b55804c659cf3b46d58849035ac8 Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.081195 4758 scope.go:117] "RemoveContainer" containerID="858a4cc294f2673581b5056b6b3f2795b013fb5990368406beb8a506660b666f" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.093470 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.103393 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.127729 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:57:12 crc kubenswrapper[4758]: E0130 08:57:12.128263 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" containerName="setup-container" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.128287 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" containerName="setup-container" Jan 30 08:57:12 crc kubenswrapper[4758]: E0130 08:57:12.128317 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" containerName="rabbitmq" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.128326 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" containerName="rabbitmq" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.128560 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" containerName="rabbitmq" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.133413 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.137405 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.137856 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.138615 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-nwnvg" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.138650 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.138756 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.138757 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.138801 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.147003 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210288 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec0e1aed-0ac5-4482-906f-89c9243729ea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210346 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210389 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec0e1aed-0ac5-4482-906f-89c9243729ea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210419 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210438 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210472 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhcqj\" (UniqueName: \"kubernetes.io/projected/ec0e1aed-0ac5-4482-906f-89c9243729ea-kube-api-access-xhcqj\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210494 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec0e1aed-0ac5-4482-906f-89c9243729ea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210529 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210552 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210569 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec0e1aed-0ac5-4482-906f-89c9243729ea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.210597 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec0e1aed-0ac5-4482-906f-89c9243729ea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312242 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec0e1aed-0ac5-4482-906f-89c9243729ea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312315 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312367 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec0e1aed-0ac5-4482-906f-89c9243729ea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312407 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312425 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312461 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhcqj\" (UniqueName: \"kubernetes.io/projected/ec0e1aed-0ac5-4482-906f-89c9243729ea-kube-api-access-xhcqj\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312493 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec0e1aed-0ac5-4482-906f-89c9243729ea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312517 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312542 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312559 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec0e1aed-0ac5-4482-906f-89c9243729ea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.312588 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec0e1aed-0ac5-4482-906f-89c9243729ea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.313465 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.313553 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.313652 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.313856 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec0e1aed-0ac5-4482-906f-89c9243729ea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.313999 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec0e1aed-0ac5-4482-906f-89c9243729ea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.314167 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec0e1aed-0ac5-4482-906f-89c9243729ea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.396980 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec0e1aed-0ac5-4482-906f-89c9243729ea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.397634 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.398341 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec0e1aed-0ac5-4482-906f-89c9243729ea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.398908 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec0e1aed-0ac5-4482-906f-89c9243729ea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.399066 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhcqj\" (UniqueName: \"kubernetes.io/projected/ec0e1aed-0ac5-4482-906f-89c9243729ea-kube-api-access-xhcqj\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.437071 4758 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec0e1aed-0ac5-4482-906f-89c9243729ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.472241 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:12 crc kubenswrapper[4758]: I0130 08:57:12.792587 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 08:57:13 crc kubenswrapper[4758]: I0130 08:57:13.064769 4758 generic.go:334] "Generic (PLEG): container finished" podID="f0bdec68-2725-4515-b6f0-dc421cec4a6d" containerID="70834c4c0f2e5700560407708799402eb1ed5ab917b05c871599639beb6a62b7" exitCode=0 Jan 30 08:57:13 crc kubenswrapper[4758]: I0130 08:57:13.064844 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" event={"ID":"f0bdec68-2725-4515-b6f0-dc421cec4a6d","Type":"ContainerDied","Data":"70834c4c0f2e5700560407708799402eb1ed5ab917b05c871599639beb6a62b7"} Jan 30 08:57:13 crc kubenswrapper[4758]: I0130 08:57:13.064873 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" event={"ID":"f0bdec68-2725-4515-b6f0-dc421cec4a6d","Type":"ContainerStarted","Data":"875a1e7b2f28ea73f14e324df5b8439b48c8b55804c659cf3b46d58849035ac8"} Jan 30 08:57:13 crc kubenswrapper[4758]: I0130 08:57:13.069184 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec0e1aed-0ac5-4482-906f-89c9243729ea","Type":"ContainerStarted","Data":"38f4b6057dda17857234eb47990f19c49f4f2cf41a81db9065c70f1cc114ff6a"} Jan 30 08:57:13 crc kubenswrapper[4758]: I0130 08:57:13.779662 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5" path="/var/lib/kubelet/pods/7f2ca5f3-0ec7-48e5-a93d-95d1205d22d5/volumes" Jan 30 08:57:14 crc kubenswrapper[4758]: I0130 08:57:14.079308 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c72311ae-5d7e-4978-a690-a9bee0b3672b","Type":"ContainerStarted","Data":"2594c87d64066e1c3d6a70cb5a8bd66af20a2abcdf92c45da3629b70a6baade1"} Jan 30 08:57:14 crc kubenswrapper[4758]: I0130 08:57:14.082355 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" event={"ID":"f0bdec68-2725-4515-b6f0-dc421cec4a6d","Type":"ContainerStarted","Data":"f270b22101baa7b3b43a0008d6e88075cab382194aed35d24bec99c8f732021a"} Jan 30 08:57:14 crc kubenswrapper[4758]: I0130 08:57:14.083029 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:15 crc kubenswrapper[4758]: I0130 08:57:15.092601 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec0e1aed-0ac5-4482-906f-89c9243729ea","Type":"ContainerStarted","Data":"d1c642bff2555442bfe6d2153832a4b302a1b8e531a2b0373b69b695dfe508bc"} Jan 30 08:57:15 crc kubenswrapper[4758]: I0130 08:57:15.121335 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" podStartSLOduration=5.121313233 podStartE2EDuration="5.121313233s" podCreationTimestamp="2026-01-30 08:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:57:14.132204256 +0000 UTC m=+1639.104515807" watchObservedRunningTime="2026-01-30 08:57:15.121313233 +0000 UTC m=+1640.093624794" Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.277952 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bcpjf"] Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.283238 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.290097 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcpjf"] Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.328806 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kqtm\" (UniqueName: \"kubernetes.io/projected/e3009f89-995c-4ed2-81e9-3b8e76c970d0-kube-api-access-4kqtm\") pod \"redhat-marketplace-bcpjf\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.328895 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-utilities\") pod \"redhat-marketplace-bcpjf\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.328954 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-catalog-content\") pod \"redhat-marketplace-bcpjf\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.430294 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kqtm\" (UniqueName: \"kubernetes.io/projected/e3009f89-995c-4ed2-81e9-3b8e76c970d0-kube-api-access-4kqtm\") pod \"redhat-marketplace-bcpjf\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.430353 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-utilities\") pod \"redhat-marketplace-bcpjf\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.430392 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-catalog-content\") pod \"redhat-marketplace-bcpjf\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.431100 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-utilities\") pod \"redhat-marketplace-bcpjf\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:17 crc 
kubenswrapper[4758]: I0130 08:57:17.431130 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-catalog-content\") pod \"redhat-marketplace-bcpjf\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.462891 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kqtm\" (UniqueName: \"kubernetes.io/projected/e3009f89-995c-4ed2-81e9-3b8e76c970d0-kube-api-access-4kqtm\") pod \"redhat-marketplace-bcpjf\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:17 crc kubenswrapper[4758]: I0130 08:57:17.620266 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:18 crc kubenswrapper[4758]: I0130 08:57:18.184461 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcpjf"] Jan 30 08:57:18 crc kubenswrapper[4758]: W0130 08:57:18.189239 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3009f89_995c_4ed2_81e9_3b8e76c970d0.slice/crio-4a05885829e22ba308af7bf44752f9e5e5cfe220f6ad236e71b15d0f3d157c15 WatchSource:0}: Error finding container 4a05885829e22ba308af7bf44752f9e5e5cfe220f6ad236e71b15d0f3d157c15: Status 404 returned error can't find the container with id 4a05885829e22ba308af7bf44752f9e5e5cfe220f6ad236e71b15d0f3d157c15 Jan 30 08:57:19 crc kubenswrapper[4758]: I0130 08:57:19.129566 4758 generic.go:334] "Generic (PLEG): container finished" podID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerID="cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298" exitCode=0 Jan 30 08:57:19 crc kubenswrapper[4758]: I0130 08:57:19.129648 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcpjf" event={"ID":"e3009f89-995c-4ed2-81e9-3b8e76c970d0","Type":"ContainerDied","Data":"cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298"} Jan 30 08:57:19 crc kubenswrapper[4758]: I0130 08:57:19.129862 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcpjf" event={"ID":"e3009f89-995c-4ed2-81e9-3b8e76c970d0","Type":"ContainerStarted","Data":"4a05885829e22ba308af7bf44752f9e5e5cfe220f6ad236e71b15d0f3d157c15"} Jan 30 08:57:19 crc kubenswrapper[4758]: I0130 08:57:19.133121 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.150879 4758 generic.go:334] "Generic (PLEG): container finished" podID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerID="9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f" exitCode=0 Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.151060 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcpjf" event={"ID":"e3009f89-995c-4ed2-81e9-3b8e76c970d0","Type":"ContainerDied","Data":"9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f"} Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.405244 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.493688 
4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c8fb88b59-pmngp"] Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.493979 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" podUID="846a15fd-f202-4eaf-b346-b66916daa7d1" containerName="dnsmasq-dns" containerID="cri-o://38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0" gracePeriod=10 Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.721009 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-549cc57c95-cpkk5"] Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.722617 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.750191 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549cc57c95-cpkk5"] Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.829908 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-ovsdbserver-nb\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.830228 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-dns-svc\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.830304 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kvxk\" (UniqueName: \"kubernetes.io/projected/cdf54085-1dd0-4eb1-9640-e75c69be5a44-kube-api-access-9kvxk\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.830328 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-dns-swift-storage-0\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.830609 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-openstack-edpm-ipam\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.830668 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-ovsdbserver-sb\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.830696 4758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-config\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.932378 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-ovsdbserver-nb\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.932484 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-dns-svc\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.932572 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kvxk\" (UniqueName: \"kubernetes.io/projected/cdf54085-1dd0-4eb1-9640-e75c69be5a44-kube-api-access-9kvxk\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.932601 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-dns-swift-storage-0\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.932654 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-openstack-edpm-ipam\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.932676 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-ovsdbserver-sb\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.932693 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-config\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.933591 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-config\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.934269 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-ovsdbserver-nb\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.936339 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-dns-swift-storage-0\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.936771 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-openstack-edpm-ipam\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.936819 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-dns-svc\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.938531 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdf54085-1dd0-4eb1-9640-e75c69be5a44-ovsdbserver-sb\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:21 crc kubenswrapper[4758]: I0130 08:57:21.961455 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kvxk\" (UniqueName: \"kubernetes.io/projected/cdf54085-1dd0-4eb1-9640-e75c69be5a44-kube-api-access-9kvxk\") pod \"dnsmasq-dns-549cc57c95-cpkk5\" (UID: \"cdf54085-1dd0-4eb1-9640-e75c69be5a44\") " pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.042186 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.147287 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.200306 4758 generic.go:334] "Generic (PLEG): container finished" podID="846a15fd-f202-4eaf-b346-b66916daa7d1" containerID="38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0" exitCode=0 Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.200360 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" event={"ID":"846a15fd-f202-4eaf-b346-b66916daa7d1","Type":"ContainerDied","Data":"38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0"} Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.200391 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" event={"ID":"846a15fd-f202-4eaf-b346-b66916daa7d1","Type":"ContainerDied","Data":"a0aeac30b993bcb0e0326076acc7ca6b11ca6918894e3364821c28d868e8c055"} Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.200410 4758 scope.go:117] "RemoveContainer" containerID="38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.200574 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c8fb88b59-pmngp" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.275370 4758 scope.go:117] "RemoveContainer" containerID="a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.327307 4758 scope.go:117] "RemoveContainer" containerID="38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0" Jan 30 08:57:22 crc kubenswrapper[4758]: E0130 08:57:22.327993 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0\": container with ID starting with 38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0 not found: ID does not exist" containerID="38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.328025 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0"} err="failed to get container status \"38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0\": rpc error: code = NotFound desc = could not find container \"38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0\": container with ID starting with 38c9ddb9d1d9ec11b27c5acf9ccc1349969a49f77c4f35ee5f14427a9b9667c0 not found: ID does not exist" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.328071 4758 scope.go:117] "RemoveContainer" containerID="a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb" Jan 30 08:57:22 crc kubenswrapper[4758]: E0130 08:57:22.331119 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb\": container with ID starting with a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb not found: ID does not exist" containerID="a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.331177 4758 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb"} err="failed to get container status \"a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb\": rpc error: code = NotFound desc = could not find container \"a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb\": container with ID starting with a8123ae1ad385cd64277688722310c6ff8fbf4fc2718a81a529af6d8ccfcdffb not found: ID does not exist" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.342941 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-sb\") pod \"846a15fd-f202-4eaf-b346-b66916daa7d1\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.343009 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7xsj\" (UniqueName: \"kubernetes.io/projected/846a15fd-f202-4eaf-b346-b66916daa7d1-kube-api-access-r7xsj\") pod \"846a15fd-f202-4eaf-b346-b66916daa7d1\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.343142 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-svc\") pod \"846a15fd-f202-4eaf-b346-b66916daa7d1\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.343191 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-nb\") pod \"846a15fd-f202-4eaf-b346-b66916daa7d1\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.343284 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-config\") pod \"846a15fd-f202-4eaf-b346-b66916daa7d1\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.343309 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-swift-storage-0\") pod \"846a15fd-f202-4eaf-b346-b66916daa7d1\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.353227 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/846a15fd-f202-4eaf-b346-b66916daa7d1-kube-api-access-r7xsj" (OuterVolumeSpecName: "kube-api-access-r7xsj") pod "846a15fd-f202-4eaf-b346-b66916daa7d1" (UID: "846a15fd-f202-4eaf-b346-b66916daa7d1"). InnerVolumeSpecName "kube-api-access-r7xsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.407189 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "846a15fd-f202-4eaf-b346-b66916daa7d1" (UID: "846a15fd-f202-4eaf-b346-b66916daa7d1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.418415 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-config" (OuterVolumeSpecName: "config") pod "846a15fd-f202-4eaf-b346-b66916daa7d1" (UID: "846a15fd-f202-4eaf-b346-b66916daa7d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.425166 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "846a15fd-f202-4eaf-b346-b66916daa7d1" (UID: "846a15fd-f202-4eaf-b346-b66916daa7d1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:22 crc kubenswrapper[4758]: W0130 08:57:22.430249 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdf54085_1dd0_4eb1_9640_e75c69be5a44.slice/crio-33d3901a476edeed4cacb18dd3ad0e70806fb9791f11013195e07aea23d5b852 WatchSource:0}: Error finding container 33d3901a476edeed4cacb18dd3ad0e70806fb9791f11013195e07aea23d5b852: Status 404 returned error can't find the container with id 33d3901a476edeed4cacb18dd3ad0e70806fb9791f11013195e07aea23d5b852 Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.433725 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549cc57c95-cpkk5"] Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.440689 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "846a15fd-f202-4eaf-b346-b66916daa7d1" (UID: "846a15fd-f202-4eaf-b346-b66916daa7d1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.444589 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "846a15fd-f202-4eaf-b346-b66916daa7d1" (UID: "846a15fd-f202-4eaf-b346-b66916daa7d1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.444738 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-swift-storage-0\") pod \"846a15fd-f202-4eaf-b346-b66916daa7d1\" (UID: \"846a15fd-f202-4eaf-b346-b66916daa7d1\") " Jan 30 08:57:22 crc kubenswrapper[4758]: W0130 08:57:22.444964 4758 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/846a15fd-f202-4eaf-b346-b66916daa7d1/volumes/kubernetes.io~configmap/dns-swift-storage-0 Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.444980 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "846a15fd-f202-4eaf-b346-b66916daa7d1" (UID: "846a15fd-f202-4eaf-b346-b66916daa7d1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.445455 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.445478 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7xsj\" (UniqueName: \"kubernetes.io/projected/846a15fd-f202-4eaf-b346-b66916daa7d1-kube-api-access-r7xsj\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.445495 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.445506 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.445517 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.445528 4758 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/846a15fd-f202-4eaf-b346-b66916daa7d1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.556587 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c8fb88b59-pmngp"] Jan 30 08:57:22 crc kubenswrapper[4758]: I0130 08:57:22.574864 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c8fb88b59-pmngp"] Jan 30 08:57:23 crc kubenswrapper[4758]: I0130 08:57:23.211596 4758 generic.go:334] "Generic (PLEG): container finished" podID="cdf54085-1dd0-4eb1-9640-e75c69be5a44" containerID="3b6a35e07ed137c26f25f1b0a851b95f307d71d7b1ca35f3be1a3b3596176f32" exitCode=0 Jan 30 08:57:23 crc kubenswrapper[4758]: I0130 08:57:23.211695 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" event={"ID":"cdf54085-1dd0-4eb1-9640-e75c69be5a44","Type":"ContainerDied","Data":"3b6a35e07ed137c26f25f1b0a851b95f307d71d7b1ca35f3be1a3b3596176f32"} Jan 30 08:57:23 crc kubenswrapper[4758]: I0130 08:57:23.212017 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" event={"ID":"cdf54085-1dd0-4eb1-9640-e75c69be5a44","Type":"ContainerStarted","Data":"33d3901a476edeed4cacb18dd3ad0e70806fb9791f11013195e07aea23d5b852"} Jan 30 08:57:23 crc kubenswrapper[4758]: I0130 08:57:23.215146 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcpjf" event={"ID":"e3009f89-995c-4ed2-81e9-3b8e76c970d0","Type":"ContainerStarted","Data":"f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9"} Jan 30 08:57:23 crc kubenswrapper[4758]: I0130 08:57:23.339484 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bcpjf" podStartSLOduration=3.652808698 podStartE2EDuration="6.339458389s" podCreationTimestamp="2026-01-30 08:57:17 +0000 UTC" firstStartedPulling="2026-01-30 08:57:19.132883274 
+0000 UTC m=+1644.105194815" lastFinishedPulling="2026-01-30 08:57:21.819532955 +0000 UTC m=+1646.791844506" observedRunningTime="2026-01-30 08:57:23.288791456 +0000 UTC m=+1648.261103037" watchObservedRunningTime="2026-01-30 08:57:23.339458389 +0000 UTC m=+1648.311769940" Jan 30 08:57:23 crc kubenswrapper[4758]: I0130 08:57:23.769185 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:57:23 crc kubenswrapper[4758]: E0130 08:57:23.769798 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:57:23 crc kubenswrapper[4758]: I0130 08:57:23.782852 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="846a15fd-f202-4eaf-b346-b66916daa7d1" path="/var/lib/kubelet/pods/846a15fd-f202-4eaf-b346-b66916daa7d1/volumes" Jan 30 08:57:24 crc kubenswrapper[4758]: I0130 08:57:24.227332 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" event={"ID":"cdf54085-1dd0-4eb1-9640-e75c69be5a44","Type":"ContainerStarted","Data":"063d8bd7528aa4601f74ef5a25c0b2f9f965f92f19a8a77ac171f162cfc1e542"} Jan 30 08:57:24 crc kubenswrapper[4758]: I0130 08:57:24.259127 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" podStartSLOduration=3.259110975 podStartE2EDuration="3.259110975s" podCreationTimestamp="2026-01-30 08:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:57:24.247778911 +0000 UTC m=+1649.220090472" watchObservedRunningTime="2026-01-30 08:57:24.259110975 +0000 UTC m=+1649.231422536" Jan 30 08:57:25 crc kubenswrapper[4758]: I0130 08:57:25.234515 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:27 crc kubenswrapper[4758]: I0130 08:57:27.620825 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:27 crc kubenswrapper[4758]: I0130 08:57:27.621393 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:27 crc kubenswrapper[4758]: I0130 08:57:27.681986 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:28 crc kubenswrapper[4758]: I0130 08:57:28.310404 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:28 crc kubenswrapper[4758]: I0130 08:57:28.371535 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcpjf"] Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.272259 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bcpjf" podUID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerName="registry-server" containerID="cri-o://f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9" 
gracePeriod=2 Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.702301 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.813239 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-utilities\") pod \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.813377 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-catalog-content\") pod \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.813508 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kqtm\" (UniqueName: \"kubernetes.io/projected/e3009f89-995c-4ed2-81e9-3b8e76c970d0-kube-api-access-4kqtm\") pod \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\" (UID: \"e3009f89-995c-4ed2-81e9-3b8e76c970d0\") " Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.815243 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-utilities" (OuterVolumeSpecName: "utilities") pod "e3009f89-995c-4ed2-81e9-3b8e76c970d0" (UID: "e3009f89-995c-4ed2-81e9-3b8e76c970d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.824878 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3009f89-995c-4ed2-81e9-3b8e76c970d0-kube-api-access-4kqtm" (OuterVolumeSpecName: "kube-api-access-4kqtm") pod "e3009f89-995c-4ed2-81e9-3b8e76c970d0" (UID: "e3009f89-995c-4ed2-81e9-3b8e76c970d0"). InnerVolumeSpecName "kube-api-access-4kqtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.906170 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3009f89-995c-4ed2-81e9-3b8e76c970d0" (UID: "e3009f89-995c-4ed2-81e9-3b8e76c970d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.916355 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kqtm\" (UniqueName: \"kubernetes.io/projected/e3009f89-995c-4ed2-81e9-3b8e76c970d0-kube-api-access-4kqtm\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.916539 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:30 crc kubenswrapper[4758]: I0130 08:57:30.916596 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3009f89-995c-4ed2-81e9-3b8e76c970d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.281860 4758 generic.go:334] "Generic (PLEG): container finished" podID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerID="f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9" exitCode=0 Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.281911 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcpjf" event={"ID":"e3009f89-995c-4ed2-81e9-3b8e76c970d0","Type":"ContainerDied","Data":"f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9"} Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.282168 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcpjf" event={"ID":"e3009f89-995c-4ed2-81e9-3b8e76c970d0","Type":"ContainerDied","Data":"4a05885829e22ba308af7bf44752f9e5e5cfe220f6ad236e71b15d0f3d157c15"} Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.282192 4758 scope.go:117] "RemoveContainer" containerID="f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9" Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.282210 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bcpjf" Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.315764 4758 scope.go:117] "RemoveContainer" containerID="9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f" Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.332942 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcpjf"] Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.346240 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcpjf"] Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.357688 4758 scope.go:117] "RemoveContainer" containerID="cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298" Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.386951 4758 scope.go:117] "RemoveContainer" containerID="f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9" Jan 30 08:57:31 crc kubenswrapper[4758]: E0130 08:57:31.387843 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9\": container with ID starting with f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9 not found: ID does not exist" containerID="f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9" Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.387896 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9"} err="failed to get container status \"f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9\": rpc error: code = NotFound desc = could not find container \"f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9\": container with ID starting with f4065ca4d3937daadd912e755b3868bee496c5fc1a69a2c169a20d45e837f9f9 not found: ID does not exist" Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.387932 4758 scope.go:117] "RemoveContainer" containerID="9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f" Jan 30 08:57:31 crc kubenswrapper[4758]: E0130 08:57:31.388534 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f\": container with ID starting with 9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f not found: ID does not exist" containerID="9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f" Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.388592 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f"} err="failed to get container status \"9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f\": rpc error: code = NotFound desc = could not find container \"9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f\": container with ID starting with 9aefb316121ce8c775a3824690cfdc90354d24ca7a454646ce4a6d9835651e2f not found: ID does not exist" Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.388621 4758 scope.go:117] "RemoveContainer" containerID="cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298" Jan 30 08:57:31 crc kubenswrapper[4758]: E0130 08:57:31.389102 4758 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298\": container with ID starting with cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298 not found: ID does not exist" containerID="cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298" Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.389135 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298"} err="failed to get container status \"cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298\": rpc error: code = NotFound desc = could not find container \"cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298\": container with ID starting with cec3d4f5017f79fa464deb00af2024160a9d9bf6e7992f61f676ee3a05a8c298 not found: ID does not exist" Jan 30 08:57:31 crc kubenswrapper[4758]: I0130 08:57:31.782605 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" path="/var/lib/kubelet/pods/e3009f89-995c-4ed2-81e9-3b8e76c970d0/volumes" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.044419 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-549cc57c95-cpkk5" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.137082 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5974d6465c-97ckq"] Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.137393 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" podUID="f0bdec68-2725-4515-b6f0-dc421cec4a6d" containerName="dnsmasq-dns" containerID="cri-o://f270b22101baa7b3b43a0008d6e88075cab382194aed35d24bec99c8f732021a" gracePeriod=10 Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.304513 4758 generic.go:334] "Generic (PLEG): container finished" podID="f0bdec68-2725-4515-b6f0-dc421cec4a6d" containerID="f270b22101baa7b3b43a0008d6e88075cab382194aed35d24bec99c8f732021a" exitCode=0 Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.304590 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" event={"ID":"f0bdec68-2725-4515-b6f0-dc421cec4a6d","Type":"ContainerDied","Data":"f270b22101baa7b3b43a0008d6e88075cab382194aed35d24bec99c8f732021a"} Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.633498 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.766538 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-openstack-edpm-ipam\") pod \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.766690 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-swift-storage-0\") pod \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.766755 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-svc\") pod \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.766784 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-886xf\" (UniqueName: \"kubernetes.io/projected/f0bdec68-2725-4515-b6f0-dc421cec4a6d-kube-api-access-886xf\") pod \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.767451 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-nb\") pod \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.767515 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-config\") pod \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.767626 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-sb\") pod \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\" (UID: \"f0bdec68-2725-4515-b6f0-dc421cec4a6d\") " Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.781114 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0bdec68-2725-4515-b6f0-dc421cec4a6d-kube-api-access-886xf" (OuterVolumeSpecName: "kube-api-access-886xf") pod "f0bdec68-2725-4515-b6f0-dc421cec4a6d" (UID: "f0bdec68-2725-4515-b6f0-dc421cec4a6d"). InnerVolumeSpecName "kube-api-access-886xf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.831483 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f0bdec68-2725-4515-b6f0-dc421cec4a6d" (UID: "f0bdec68-2725-4515-b6f0-dc421cec4a6d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.839397 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-config" (OuterVolumeSpecName: "config") pod "f0bdec68-2725-4515-b6f0-dc421cec4a6d" (UID: "f0bdec68-2725-4515-b6f0-dc421cec4a6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.848488 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f0bdec68-2725-4515-b6f0-dc421cec4a6d" (UID: "f0bdec68-2725-4515-b6f0-dc421cec4a6d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.850591 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0bdec68-2725-4515-b6f0-dc421cec4a6d" (UID: "f0bdec68-2725-4515-b6f0-dc421cec4a6d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.850825 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f0bdec68-2725-4515-b6f0-dc421cec4a6d" (UID: "f0bdec68-2725-4515-b6f0-dc421cec4a6d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.853509 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f0bdec68-2725-4515-b6f0-dc421cec4a6d" (UID: "f0bdec68-2725-4515-b6f0-dc421cec4a6d"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.870152 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.870257 4758 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.870272 4758 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.870286 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-886xf\" (UniqueName: \"kubernetes.io/projected/f0bdec68-2725-4515-b6f0-dc421cec4a6d-kube-api-access-886xf\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.870298 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.870309 4758 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-config\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:32 crc kubenswrapper[4758]: I0130 08:57:32.870327 4758 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0bdec68-2725-4515-b6f0-dc421cec4a6d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 08:57:33 crc kubenswrapper[4758]: I0130 08:57:33.317676 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" event={"ID":"f0bdec68-2725-4515-b6f0-dc421cec4a6d","Type":"ContainerDied","Data":"875a1e7b2f28ea73f14e324df5b8439b48c8b55804c659cf3b46d58849035ac8"} Jan 30 08:57:33 crc kubenswrapper[4758]: I0130 08:57:33.319024 4758 scope.go:117] "RemoveContainer" containerID="f270b22101baa7b3b43a0008d6e88075cab382194aed35d24bec99c8f732021a" Jan 30 08:57:33 crc kubenswrapper[4758]: I0130 08:57:33.317737 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5974d6465c-97ckq" Jan 30 08:57:33 crc kubenswrapper[4758]: I0130 08:57:33.343375 4758 scope.go:117] "RemoveContainer" containerID="70834c4c0f2e5700560407708799402eb1ed5ab917b05c871599639beb6a62b7" Jan 30 08:57:33 crc kubenswrapper[4758]: I0130 08:57:33.354183 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5974d6465c-97ckq"] Jan 30 08:57:33 crc kubenswrapper[4758]: I0130 08:57:33.362901 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5974d6465c-97ckq"] Jan 30 08:57:33 crc kubenswrapper[4758]: I0130 08:57:33.783886 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0bdec68-2725-4515-b6f0-dc421cec4a6d" path="/var/lib/kubelet/pods/f0bdec68-2725-4515-b6f0-dc421cec4a6d/volumes" Jan 30 08:57:37 crc kubenswrapper[4758]: I0130 08:57:37.769284 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:57:37 crc kubenswrapper[4758]: E0130 08:57:37.771388 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:57:45 crc kubenswrapper[4758]: I0130 08:57:45.425713 4758 generic.go:334] "Generic (PLEG): container finished" podID="c72311ae-5d7e-4978-a690-a9bee0b3672b" containerID="2594c87d64066e1c3d6a70cb5a8bd66af20a2abcdf92c45da3629b70a6baade1" exitCode=0 Jan 30 08:57:45 crc kubenswrapper[4758]: I0130 08:57:45.425801 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c72311ae-5d7e-4978-a690-a9bee0b3672b","Type":"ContainerDied","Data":"2594c87d64066e1c3d6a70cb5a8bd66af20a2abcdf92c45da3629b70a6baade1"} Jan 30 08:57:46 crc kubenswrapper[4758]: I0130 08:57:46.437073 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c72311ae-5d7e-4978-a690-a9bee0b3672b","Type":"ContainerStarted","Data":"64e43eaf45a9a2a28f864e37fff3b5dd18d98e963795ea05944fb9ada14ebbb5"} Jan 30 08:57:46 crc kubenswrapper[4758]: I0130 08:57:46.437601 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 08:57:46 crc kubenswrapper[4758]: I0130 08:57:46.438813 4758 generic.go:334] "Generic (PLEG): container finished" podID="ec0e1aed-0ac5-4482-906f-89c9243729ea" containerID="d1c642bff2555442bfe6d2153832a4b302a1b8e531a2b0373b69b695dfe508bc" exitCode=0 Jan 30 08:57:46 crc kubenswrapper[4758]: I0130 08:57:46.438853 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec0e1aed-0ac5-4482-906f-89c9243729ea","Type":"ContainerDied","Data":"d1c642bff2555442bfe6d2153832a4b302a1b8e531a2b0373b69b695dfe508bc"} Jan 30 08:57:46 crc kubenswrapper[4758]: I0130 08:57:46.523749 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.523705114 podStartE2EDuration="36.523705114s" podCreationTimestamp="2026-01-30 08:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:57:46.497369572 +0000 UTC 
m=+1671.469681153" watchObservedRunningTime="2026-01-30 08:57:46.523705114 +0000 UTC m=+1671.496016665" Jan 30 08:57:47 crc kubenswrapper[4758]: I0130 08:57:47.449531 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec0e1aed-0ac5-4482-906f-89c9243729ea","Type":"ContainerStarted","Data":"f603760bbfdcc2cb03be1a71e3dff268e341a771ea695ad0b4217bf64941a548"} Jan 30 08:57:47 crc kubenswrapper[4758]: I0130 08:57:47.450116 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:57:47 crc kubenswrapper[4758]: I0130 08:57:47.483584 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.483559137 podStartE2EDuration="35.483559137s" podCreationTimestamp="2026-01-30 08:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 08:57:47.472850122 +0000 UTC m=+1672.445161683" watchObservedRunningTime="2026-01-30 08:57:47.483559137 +0000 UTC m=+1672.455870698" Jan 30 08:57:49 crc kubenswrapper[4758]: I0130 08:57:49.769200 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:57:49 crc kubenswrapper[4758]: E0130 08:57:49.769521 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.373084 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c"] Jan 30 08:57:56 crc kubenswrapper[4758]: E0130 08:57:56.374061 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0bdec68-2725-4515-b6f0-dc421cec4a6d" containerName="dnsmasq-dns" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374077 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0bdec68-2725-4515-b6f0-dc421cec4a6d" containerName="dnsmasq-dns" Jan 30 08:57:56 crc kubenswrapper[4758]: E0130 08:57:56.374098 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="846a15fd-f202-4eaf-b346-b66916daa7d1" containerName="dnsmasq-dns" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374104 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="846a15fd-f202-4eaf-b346-b66916daa7d1" containerName="dnsmasq-dns" Jan 30 08:57:56 crc kubenswrapper[4758]: E0130 08:57:56.374122 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerName="extract-utilities" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374147 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerName="extract-utilities" Jan 30 08:57:56 crc kubenswrapper[4758]: E0130 08:57:56.374154 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerName="registry-server" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374161 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" 
containerName="registry-server" Jan 30 08:57:56 crc kubenswrapper[4758]: E0130 08:57:56.374177 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="846a15fd-f202-4eaf-b346-b66916daa7d1" containerName="init" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374182 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="846a15fd-f202-4eaf-b346-b66916daa7d1" containerName="init" Jan 30 08:57:56 crc kubenswrapper[4758]: E0130 08:57:56.374196 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerName="extract-content" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374202 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerName="extract-content" Jan 30 08:57:56 crc kubenswrapper[4758]: E0130 08:57:56.374226 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0bdec68-2725-4515-b6f0-dc421cec4a6d" containerName="init" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374231 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0bdec68-2725-4515-b6f0-dc421cec4a6d" containerName="init" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374423 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3009f89-995c-4ed2-81e9-3b8e76c970d0" containerName="registry-server" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374447 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0bdec68-2725-4515-b6f0-dc421cec4a6d" containerName="dnsmasq-dns" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.374462 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="846a15fd-f202-4eaf-b346-b66916daa7d1" containerName="dnsmasq-dns" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.375082 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.381819 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.382108 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.382134 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.382301 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.399072 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c"] Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.518411 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.518725 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8crm\" (UniqueName: \"kubernetes.io/projected/350797d0-7758-4c0d-84cb-ec160451f377-kube-api-access-s8crm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.518761 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.518826 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.620272 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.620331 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8crm\" (UniqueName: 
\"kubernetes.io/projected/350797d0-7758-4c0d-84cb-ec160451f377-kube-api-access-s8crm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.620370 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.620435 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.626875 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.627960 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.633743 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.636913 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8crm\" (UniqueName: \"kubernetes.io/projected/350797d0-7758-4c0d-84cb-ec160451f377-kube-api-access-s8crm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:56 crc kubenswrapper[4758]: I0130 08:57:56.715886 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:57:57 crc kubenswrapper[4758]: I0130 08:57:57.278080 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c"] Jan 30 08:57:57 crc kubenswrapper[4758]: I0130 08:57:57.539629 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" event={"ID":"350797d0-7758-4c0d-84cb-ec160451f377","Type":"ContainerStarted","Data":"0be4b73b6f0e1e461c8cd4108115af90102d388e9f0209704f5e5094f4660b61"} Jan 30 08:58:01 crc kubenswrapper[4758]: I0130 08:58:01.096265 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 08:58:02 crc kubenswrapper[4758]: I0130 08:58:02.476264 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 08:58:04 crc kubenswrapper[4758]: I0130 08:58:04.769722 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:58:04 crc kubenswrapper[4758]: E0130 08:58:04.773274 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:58:06 crc kubenswrapper[4758]: I0130 08:58:06.039538 4758 scope.go:117] "RemoveContainer" containerID="67149206f58ad0507606cce2e343d6cc655be53f23f559d70aa0ec8a091a7ab5" Jan 30 08:58:07 crc kubenswrapper[4758]: I0130 08:58:07.918928 4758 scope.go:117] "RemoveContainer" containerID="e4c6a196b48061d2cc6a1f8d240ab8cead89ed7d7e814ad3fae5ef4e05e106b8" Jan 30 08:58:08 crc kubenswrapper[4758]: I0130 08:58:08.642767 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" event={"ID":"350797d0-7758-4c0d-84cb-ec160451f377","Type":"ContainerStarted","Data":"eaf998eb2834ee46fee04c4817b18dcf9c68fe885758907b76d71ee69c1a09dc"} Jan 30 08:58:08 crc kubenswrapper[4758]: I0130 08:58:08.667101 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" podStartSLOduration=1.965461689 podStartE2EDuration="12.667082166s" podCreationTimestamp="2026-01-30 08:57:56 +0000 UTC" firstStartedPulling="2026-01-30 08:57:57.292964235 +0000 UTC m=+1682.265275786" lastFinishedPulling="2026-01-30 08:58:07.994584712 +0000 UTC m=+1692.966896263" observedRunningTime="2026-01-30 08:58:08.656584608 +0000 UTC m=+1693.628896179" watchObservedRunningTime="2026-01-30 08:58:08.667082166 +0000 UTC m=+1693.639393717" Jan 30 08:58:15 crc kubenswrapper[4758]: I0130 08:58:15.777346 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:58:15 crc kubenswrapper[4758]: E0130 08:58:15.778263 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:58:19 crc kubenswrapper[4758]: I0130 08:58:19.740695 4758 generic.go:334] "Generic (PLEG): container finished" podID="350797d0-7758-4c0d-84cb-ec160451f377" containerID="eaf998eb2834ee46fee04c4817b18dcf9c68fe885758907b76d71ee69c1a09dc" exitCode=0 Jan 30 08:58:19 crc kubenswrapper[4758]: I0130 08:58:19.740824 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" event={"ID":"350797d0-7758-4c0d-84cb-ec160451f377","Type":"ContainerDied","Data":"eaf998eb2834ee46fee04c4817b18dcf9c68fe885758907b76d71ee69c1a09dc"} Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.266369 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.348993 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-repo-setup-combined-ca-bundle\") pod \"350797d0-7758-4c0d-84cb-ec160451f377\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.349468 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8crm\" (UniqueName: \"kubernetes.io/projected/350797d0-7758-4c0d-84cb-ec160451f377-kube-api-access-s8crm\") pod \"350797d0-7758-4c0d-84cb-ec160451f377\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.349649 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-inventory\") pod \"350797d0-7758-4c0d-84cb-ec160451f377\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.350542 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-ssh-key-openstack-edpm-ipam\") pod \"350797d0-7758-4c0d-84cb-ec160451f377\" (UID: \"350797d0-7758-4c0d-84cb-ec160451f377\") " Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.357450 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/350797d0-7758-4c0d-84cb-ec160451f377-kube-api-access-s8crm" (OuterVolumeSpecName: "kube-api-access-s8crm") pod "350797d0-7758-4c0d-84cb-ec160451f377" (UID: "350797d0-7758-4c0d-84cb-ec160451f377"). InnerVolumeSpecName "kube-api-access-s8crm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.358192 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "350797d0-7758-4c0d-84cb-ec160451f377" (UID: "350797d0-7758-4c0d-84cb-ec160451f377"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.378876 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-inventory" (OuterVolumeSpecName: "inventory") pod "350797d0-7758-4c0d-84cb-ec160451f377" (UID: "350797d0-7758-4c0d-84cb-ec160451f377"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.382570 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "350797d0-7758-4c0d-84cb-ec160451f377" (UID: "350797d0-7758-4c0d-84cb-ec160451f377"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.453867 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.454099 4758 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.454166 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8crm\" (UniqueName: \"kubernetes.io/projected/350797d0-7758-4c0d-84cb-ec160451f377-kube-api-access-s8crm\") on node \"crc\" DevicePath \"\"" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.454230 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/350797d0-7758-4c0d-84cb-ec160451f377-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.765889 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" event={"ID":"350797d0-7758-4c0d-84cb-ec160451f377","Type":"ContainerDied","Data":"0be4b73b6f0e1e461c8cd4108115af90102d388e9f0209704f5e5094f4660b61"} Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.765952 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0be4b73b6f0e1e461c8cd4108115af90102d388e9f0209704f5e5094f4660b61" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.766366 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.874783 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b"] Jan 30 08:58:21 crc kubenswrapper[4758]: E0130 08:58:21.875514 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350797d0-7758-4c0d-84cb-ec160451f377" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.875546 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="350797d0-7758-4c0d-84cb-ec160451f377" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.875762 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="350797d0-7758-4c0d-84cb-ec160451f377" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.876679 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.879557 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.879637 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.879681 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.879561 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.889418 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b"] Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.970169 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rb8m\" (UniqueName: \"kubernetes.io/projected/a7d607da-5923-4ef5-82ba-083df5db3864-kube-api-access-7rb8m\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rr84b\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.970579 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rr84b\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:21 crc kubenswrapper[4758]: I0130 08:58:21.970688 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rr84b\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:22 crc kubenswrapper[4758]: I0130 08:58:22.072636 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rr84b\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:22 crc kubenswrapper[4758]: I0130 08:58:22.072709 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rr84b\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:22 crc kubenswrapper[4758]: I0130 08:58:22.072744 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rb8m\" (UniqueName: \"kubernetes.io/projected/a7d607da-5923-4ef5-82ba-083df5db3864-kube-api-access-7rb8m\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rr84b\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:22 crc kubenswrapper[4758]: I0130 08:58:22.076759 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rr84b\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:22 crc kubenswrapper[4758]: I0130 08:58:22.080878 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rr84b\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:22 crc kubenswrapper[4758]: I0130 08:58:22.090909 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rb8m\" (UniqueName: \"kubernetes.io/projected/a7d607da-5923-4ef5-82ba-083df5db3864-kube-api-access-7rb8m\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rr84b\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:22 crc kubenswrapper[4758]: I0130 08:58:22.195790 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:22 crc kubenswrapper[4758]: I0130 08:58:22.734119 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b"] Jan 30 08:58:22 crc kubenswrapper[4758]: I0130 08:58:22.776887 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" event={"ID":"a7d607da-5923-4ef5-82ba-083df5db3864","Type":"ContainerStarted","Data":"6029110f7b585297305a3e0da33da93e6ca65c743da75995e51c90909b2039a8"} Jan 30 08:58:23 crc kubenswrapper[4758]: I0130 08:58:23.799990 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" event={"ID":"a7d607da-5923-4ef5-82ba-083df5db3864","Type":"ContainerStarted","Data":"49082d8328ca84ebe4152bfeac7eee5e846d4e7cedf87c007c5c3ea875579b61"} Jan 30 08:58:23 crc kubenswrapper[4758]: I0130 08:58:23.820001 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" podStartSLOduration=2.082644262 podStartE2EDuration="2.819958312s" podCreationTimestamp="2026-01-30 08:58:21 +0000 UTC" firstStartedPulling="2026-01-30 08:58:22.725575095 +0000 UTC m=+1707.697886646" lastFinishedPulling="2026-01-30 08:58:23.462889145 +0000 UTC m=+1708.435200696" observedRunningTime="2026-01-30 08:58:23.819699223 +0000 UTC m=+1708.792010774" watchObservedRunningTime="2026-01-30 08:58:23.819958312 +0000 UTC m=+1708.792269863" Jan 30 08:58:26 crc kubenswrapper[4758]: I0130 08:58:26.827539 4758 generic.go:334] "Generic (PLEG): container finished" podID="a7d607da-5923-4ef5-82ba-083df5db3864" containerID="49082d8328ca84ebe4152bfeac7eee5e846d4e7cedf87c007c5c3ea875579b61" exitCode=0 Jan 30 08:58:26 crc kubenswrapper[4758]: I0130 08:58:26.827626 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" event={"ID":"a7d607da-5923-4ef5-82ba-083df5db3864","Type":"ContainerDied","Data":"49082d8328ca84ebe4152bfeac7eee5e846d4e7cedf87c007c5c3ea875579b61"} Jan 30 08:58:27 crc kubenswrapper[4758]: I0130 08:58:27.769263 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:58:27 crc kubenswrapper[4758]: E0130 08:58:27.769547 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.324980 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.401168 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-inventory\") pod \"a7d607da-5923-4ef5-82ba-083df5db3864\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.401214 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-ssh-key-openstack-edpm-ipam\") pod \"a7d607da-5923-4ef5-82ba-083df5db3864\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.401267 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rb8m\" (UniqueName: \"kubernetes.io/projected/a7d607da-5923-4ef5-82ba-083df5db3864-kube-api-access-7rb8m\") pod \"a7d607da-5923-4ef5-82ba-083df5db3864\" (UID: \"a7d607da-5923-4ef5-82ba-083df5db3864\") " Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.414595 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7d607da-5923-4ef5-82ba-083df5db3864-kube-api-access-7rb8m" (OuterVolumeSpecName: "kube-api-access-7rb8m") pod "a7d607da-5923-4ef5-82ba-083df5db3864" (UID: "a7d607da-5923-4ef5-82ba-083df5db3864"). InnerVolumeSpecName "kube-api-access-7rb8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.433843 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a7d607da-5923-4ef5-82ba-083df5db3864" (UID: "a7d607da-5923-4ef5-82ba-083df5db3864"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.460638 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-inventory" (OuterVolumeSpecName: "inventory") pod "a7d607da-5923-4ef5-82ba-083df5db3864" (UID: "a7d607da-5923-4ef5-82ba-083df5db3864"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.503149 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.503183 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7d607da-5923-4ef5-82ba-083df5db3864-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.503195 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rb8m\" (UniqueName: \"kubernetes.io/projected/a7d607da-5923-4ef5-82ba-083df5db3864-kube-api-access-7rb8m\") on node \"crc\" DevicePath \"\"" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.859056 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" event={"ID":"a7d607da-5923-4ef5-82ba-083df5db3864","Type":"ContainerDied","Data":"6029110f7b585297305a3e0da33da93e6ca65c743da75995e51c90909b2039a8"} Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.859417 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6029110f7b585297305a3e0da33da93e6ca65c743da75995e51c90909b2039a8" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.859121 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rr84b" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.933630 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp"] Jan 30 08:58:28 crc kubenswrapper[4758]: E0130 08:58:28.934291 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7d607da-5923-4ef5-82ba-083df5db3864" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.934368 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7d607da-5923-4ef5-82ba-083df5db3864" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.934662 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7d607da-5923-4ef5-82ba-083df5db3864" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.935472 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.939654 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.939653 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.940013 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.940739 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 08:58:28 crc kubenswrapper[4758]: I0130 08:58:28.944370 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp"] Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.115100 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.116240 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.116599 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvktt\" (UniqueName: \"kubernetes.io/projected/64e0d966-7ff9-4dd8-97c0-660cde10793b-kube-api-access-vvktt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.116862 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.217907 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.217968 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.218016 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.218100 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvktt\" (UniqueName: \"kubernetes.io/projected/64e0d966-7ff9-4dd8-97c0-660cde10793b-kube-api-access-vvktt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.221992 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.222549 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.230080 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.247005 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvktt\" (UniqueName: \"kubernetes.io/projected/64e0d966-7ff9-4dd8-97c0-660cde10793b-kube-api-access-vvktt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.258921 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.803103 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp"] Jan 30 08:58:29 crc kubenswrapper[4758]: I0130 08:58:29.868519 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" event={"ID":"64e0d966-7ff9-4dd8-97c0-660cde10793b","Type":"ContainerStarted","Data":"25a2f1bc96743755a476b5e1b766c3ea8d8ac8c33ead2cf689d320248e522ec6"} Jan 30 08:58:30 crc kubenswrapper[4758]: I0130 08:58:30.880852 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" event={"ID":"64e0d966-7ff9-4dd8-97c0-660cde10793b","Type":"ContainerStarted","Data":"b2b90750683b86930522f155931531da5a2fe961e31b6ee7ce56016bc1eeeff0"} Jan 30 08:58:30 crc kubenswrapper[4758]: I0130 08:58:30.904227 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" podStartSLOduration=2.465470447 podStartE2EDuration="2.904208426s" podCreationTimestamp="2026-01-30 08:58:28 +0000 UTC" firstStartedPulling="2026-01-30 08:58:29.822229956 +0000 UTC m=+1714.794541507" lastFinishedPulling="2026-01-30 08:58:30.260967945 +0000 UTC m=+1715.233279486" observedRunningTime="2026-01-30 08:58:30.899790527 +0000 UTC m=+1715.872102078" watchObservedRunningTime="2026-01-30 08:58:30.904208426 +0000 UTC m=+1715.876519977" Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.058090 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-vv7kf"] Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.066815 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d642-account-create-update-zwktw"] Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.076492 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-vv7kf"] Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.097152 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-8r78c"] Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.109699 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-2wfwb"] Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.120591 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d642-account-create-update-zwktw"] Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.128996 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-2wfwb"] Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.136969 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-8r78c"] Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.779734 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f506e0d-da4f-4243-923b-f7e102fafd92" path="/var/lib/kubelet/pods/2f506e0d-da4f-4243-923b-f7e102fafd92/volumes" Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.780378 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="418d35e8-ad4e-4051-a0f1-fc9179300441" path="/var/lib/kubelet/pods/418d35e8-ad4e-4051-a0f1-fc9179300441/volumes" Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.781796 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="585a6b3e-695a-4ddc-91a6-9b39b241ffd0" path="/var/lib/kubelet/pods/585a6b3e-695a-4ddc-91a6-9b39b241ffd0/volumes" Jan 30 08:58:35 crc kubenswrapper[4758]: I0130 08:58:35.783132 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a04121a4-3ab4-48c5-903e-8d0002771ff7" path="/var/lib/kubelet/pods/a04121a4-3ab4-48c5-903e-8d0002771ff7/volumes" Jan 30 08:58:36 crc kubenswrapper[4758]: I0130 08:58:36.037418 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-50ef-account-create-update-bskhv"] Jan 30 08:58:36 crc kubenswrapper[4758]: I0130 08:58:36.046303 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-72be-account-create-update-gphrs"] Jan 30 08:58:36 crc kubenswrapper[4758]: I0130 08:58:36.055696 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-50ef-account-create-update-bskhv"] Jan 30 08:58:36 crc kubenswrapper[4758]: I0130 08:58:36.064989 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-72be-account-create-update-gphrs"] Jan 30 08:58:37 crc kubenswrapper[4758]: I0130 08:58:37.804429 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e9c3b23-6678-42de-923a-27ebb5fb61a3" path="/var/lib/kubelet/pods/1e9c3b23-6678-42de-923a-27ebb5fb61a3/volumes" Jan 30 08:58:37 crc kubenswrapper[4758]: I0130 08:58:37.806916 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2121970a-59f7-4943-ac1e-fa675e5eef8e" path="/var/lib/kubelet/pods/2121970a-59f7-4943-ac1e-fa675e5eef8e/volumes" Jan 30 08:58:39 crc kubenswrapper[4758]: I0130 08:58:39.769027 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:58:39 crc kubenswrapper[4758]: E0130 08:58:39.769499 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:58:43 crc kubenswrapper[4758]: I0130 08:58:43.042245 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5vdm5"] Jan 30 08:58:43 crc kubenswrapper[4758]: I0130 08:58:43.051299 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5vdm5"] Jan 30 08:58:43 crc kubenswrapper[4758]: I0130 08:58:43.781162 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78535c40-59db-4a0e-bcb9-4bae7e92548c" path="/var/lib/kubelet/pods/78535c40-59db-4a0e-bcb9-4bae7e92548c/volumes" Jan 30 08:58:53 crc kubenswrapper[4758]: I0130 08:58:53.769775 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:58:53 crc kubenswrapper[4758]: E0130 08:58:53.770591 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:59:04 crc kubenswrapper[4758]: I0130 
08:59:04.034300 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-cgr7c"] Jan 30 08:59:04 crc kubenswrapper[4758]: I0130 08:59:04.044853 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-cgr7c"] Jan 30 08:59:05 crc kubenswrapper[4758]: I0130 08:59:05.025666 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-2rzkq"] Jan 30 08:59:05 crc kubenswrapper[4758]: I0130 08:59:05.033777 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-2rzkq"] Jan 30 08:59:05 crc kubenswrapper[4758]: I0130 08:59:05.784970 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b87c6fc-6491-419a-96bb-9542edb1e8aa" path="/var/lib/kubelet/pods/6b87c6fc-6491-419a-96bb-9542edb1e8aa/volumes" Jan 30 08:59:05 crc kubenswrapper[4758]: I0130 08:59:05.786731 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30" path="/var/lib/kubelet/pods/dbfd31c3-1443-4c37-8fa2-6c1f3e1e0d30/volumes" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.118292 4758 scope.go:117] "RemoveContainer" containerID="79153cef375b67170413da87462ce99cc5566739d6bdf2f6c1513a6b0ebe4db9" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.161214 4758 scope.go:117] "RemoveContainer" containerID="a6e4fb8d0b259571728fd753067c06a5a86fb0519ae153af2e5b3b44c3d06d59" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.224236 4758 scope.go:117] "RemoveContainer" containerID="26c6c01e8de6266c91045a1615b456f02f5eaff630949bebc138b39fd169f580" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.266506 4758 scope.go:117] "RemoveContainer" containerID="1b0811ae6d53b97ab814b4974446d3583f0018b01ea502fb69edd27ab1156add" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.333887 4758 scope.go:117] "RemoveContainer" containerID="6c4d4c33fc49af87a76aa96b41f58bb62854886d94fd3aadaddede31f4e5c0ed" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.359837 4758 scope.go:117] "RemoveContainer" containerID="1d16e0f46d09a180bfaf46bb185c0c0c5a4a301473761aa6a905b4411c7a3f20" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.405813 4758 scope.go:117] "RemoveContainer" containerID="defa08629f2c819116b155489e34ad3fab9a107ee7bcfc7941be06048d203e56" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.425258 4758 scope.go:117] "RemoveContainer" containerID="daf0e92dedcf6f1f855f17b37c2a9bda8d265d7523eda70616bc5f00569e869a" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.454181 4758 scope.go:117] "RemoveContainer" containerID="aabb12ec35a8ae64e19c55b39db2dfd8844c3fe734c3d7e0940dfeeffb34c9d1" Jan 30 08:59:08 crc kubenswrapper[4758]: I0130 08:59:08.769461 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:59:08 crc kubenswrapper[4758]: E0130 08:59:08.769833 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.063792 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-feed-account-create-update-qcmp4"] Jan 30 
08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.079383 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-495a-account-create-update-k2qtr"] Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.087275 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-bstt2"] Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.096085 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-bstt2"] Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.104012 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-feed-account-create-update-qcmp4"] Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.112637 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5131-account-create-update-tbcx5"] Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.120965 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-495a-account-create-update-k2qtr"] Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.128601 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5131-account-create-update-tbcx5"] Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.779160 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58afb54a-7b57-42bc-af7c-13db0bfd1580" path="/var/lib/kubelet/pods/58afb54a-7b57-42bc-af7c-13db0bfd1580/volumes" Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.781970 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a01921f-9f43-471a-bbd3-0a7e9bab364e" path="/var/lib/kubelet/pods/8a01921f-9f43-471a-bbd3-0a7e9bab364e/volumes" Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.783232 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c85c22cc-99a7-4939-904c-bffa8f2d5457" path="/var/lib/kubelet/pods/c85c22cc-99a7-4939-904c-bffa8f2d5457/volumes" Jan 30 08:59:09 crc kubenswrapper[4758]: I0130 08:59:09.785683 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4bedf0c-06c1-4eaa-b731-3b8c2438456d" path="/var/lib/kubelet/pods/f4bedf0c-06c1-4eaa-b731-3b8c2438456d/volumes" Jan 30 08:59:12 crc kubenswrapper[4758]: I0130 08:59:12.032193 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-kc8cg"] Jan 30 08:59:12 crc kubenswrapper[4758]: I0130 08:59:12.040932 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-kc8cg"] Jan 30 08:59:13 crc kubenswrapper[4758]: I0130 08:59:13.782020 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2e127b5-11e2-40a6-8389-a9d08b8cae4f" path="/var/lib/kubelet/pods/d2e127b5-11e2-40a6-8389-a9d08b8cae4f/volumes" Jan 30 08:59:15 crc kubenswrapper[4758]: I0130 08:59:15.041137 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-6cqnq"] Jan 30 08:59:15 crc kubenswrapper[4758]: I0130 08:59:15.052329 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-6cqnq"] Jan 30 08:59:15 crc kubenswrapper[4758]: I0130 08:59:15.783317 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5665f568-1d3e-48dc-8a32-7bd9ad02a037" path="/var/lib/kubelet/pods/5665f568-1d3e-48dc-8a32-7bd9ad02a037/volumes" Jan 30 08:59:22 crc kubenswrapper[4758]: I0130 08:59:22.769417 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:59:22 crc kubenswrapper[4758]: E0130 
08:59:22.770716 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:59:32 crc kubenswrapper[4758]: I0130 08:59:32.820275 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:59:32 crc kubenswrapper[4758]: E0130 08:59:32.821757 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:59:44 crc kubenswrapper[4758]: I0130 08:59:44.769956 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:59:44 crc kubenswrapper[4758]: E0130 08:59:44.771425 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 08:59:51 crc kubenswrapper[4758]: I0130 08:59:51.041482 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-z5hrj"] Jan 30 08:59:51 crc kubenswrapper[4758]: I0130 08:59:51.048201 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-z5hrj"] Jan 30 08:59:51 crc kubenswrapper[4758]: I0130 08:59:51.781318 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b166e095-ba6b-443f-8c0a-0e83bb698ccd" path="/var/lib/kubelet/pods/b166e095-ba6b-443f-8c0a-0e83bb698ccd/volumes" Jan 30 08:59:55 crc kubenswrapper[4758]: I0130 08:59:55.775271 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 08:59:55 crc kubenswrapper[4758]: E0130 08:59:55.776951 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.151828 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx"] Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.153823 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.159113 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.159676 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.170761 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx"] Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.180163 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8b22039-92b3-4488-a971-2913dedb64ed-secret-volume\") pod \"collect-profiles-29496060-f2nqx\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.180271 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8b22039-92b3-4488-a971-2913dedb64ed-config-volume\") pod \"collect-profiles-29496060-f2nqx\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.180359 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptrfh\" (UniqueName: \"kubernetes.io/projected/e8b22039-92b3-4488-a971-2913dedb64ed-kube-api-access-ptrfh\") pod \"collect-profiles-29496060-f2nqx\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.282586 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8b22039-92b3-4488-a971-2913dedb64ed-secret-volume\") pod \"collect-profiles-29496060-f2nqx\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.282678 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8b22039-92b3-4488-a971-2913dedb64ed-config-volume\") pod \"collect-profiles-29496060-f2nqx\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.282717 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptrfh\" (UniqueName: \"kubernetes.io/projected/e8b22039-92b3-4488-a971-2913dedb64ed-kube-api-access-ptrfh\") pod \"collect-profiles-29496060-f2nqx\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.283899 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8b22039-92b3-4488-a971-2913dedb64ed-config-volume\") pod 
\"collect-profiles-29496060-f2nqx\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.296295 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8b22039-92b3-4488-a971-2913dedb64ed-secret-volume\") pod \"collect-profiles-29496060-f2nqx\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.299952 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptrfh\" (UniqueName: \"kubernetes.io/projected/e8b22039-92b3-4488-a971-2913dedb64ed-kube-api-access-ptrfh\") pod \"collect-profiles-29496060-f2nqx\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.476207 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:00 crc kubenswrapper[4758]: I0130 09:00:00.906637 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx"] Jan 30 09:00:01 crc kubenswrapper[4758]: I0130 09:00:01.746666 4758 generic.go:334] "Generic (PLEG): container finished" podID="e8b22039-92b3-4488-a971-2913dedb64ed" containerID="04e032ddde7c1d4a35b13c1c04e206e7ca345c709ebf2a8892f46a64701c926c" exitCode=0 Jan 30 09:00:01 crc kubenswrapper[4758]: I0130 09:00:01.746868 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" event={"ID":"e8b22039-92b3-4488-a971-2913dedb64ed","Type":"ContainerDied","Data":"04e032ddde7c1d4a35b13c1c04e206e7ca345c709ebf2a8892f46a64701c926c"} Jan 30 09:00:01 crc kubenswrapper[4758]: I0130 09:00:01.747243 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" event={"ID":"e8b22039-92b3-4488-a971-2913dedb64ed","Type":"ContainerStarted","Data":"b182bf0b41a1913720a2b1bb4645bfa412a564e1599ae5bd1baa8a674af75d72"} Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.149973 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.253014 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8b22039-92b3-4488-a971-2913dedb64ed-secret-volume\") pod \"e8b22039-92b3-4488-a971-2913dedb64ed\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.253257 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptrfh\" (UniqueName: \"kubernetes.io/projected/e8b22039-92b3-4488-a971-2913dedb64ed-kube-api-access-ptrfh\") pod \"e8b22039-92b3-4488-a971-2913dedb64ed\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.253304 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8b22039-92b3-4488-a971-2913dedb64ed-config-volume\") pod \"e8b22039-92b3-4488-a971-2913dedb64ed\" (UID: \"e8b22039-92b3-4488-a971-2913dedb64ed\") " Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.254361 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8b22039-92b3-4488-a971-2913dedb64ed-config-volume" (OuterVolumeSpecName: "config-volume") pod "e8b22039-92b3-4488-a971-2913dedb64ed" (UID: "e8b22039-92b3-4488-a971-2913dedb64ed"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.260271 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8b22039-92b3-4488-a971-2913dedb64ed-kube-api-access-ptrfh" (OuterVolumeSpecName: "kube-api-access-ptrfh") pod "e8b22039-92b3-4488-a971-2913dedb64ed" (UID: "e8b22039-92b3-4488-a971-2913dedb64ed"). InnerVolumeSpecName "kube-api-access-ptrfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.261151 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b22039-92b3-4488-a971-2913dedb64ed-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e8b22039-92b3-4488-a971-2913dedb64ed" (UID: "e8b22039-92b3-4488-a971-2913dedb64ed"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.356082 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8b22039-92b3-4488-a971-2913dedb64ed-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.356124 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptrfh\" (UniqueName: \"kubernetes.io/projected/e8b22039-92b3-4488-a971-2913dedb64ed-kube-api-access-ptrfh\") on node \"crc\" DevicePath \"\"" Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.356134 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8b22039-92b3-4488-a971-2913dedb64ed-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.766438 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" event={"ID":"e8b22039-92b3-4488-a971-2913dedb64ed","Type":"ContainerDied","Data":"b182bf0b41a1913720a2b1bb4645bfa412a564e1599ae5bd1baa8a674af75d72"} Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.766515 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b182bf0b41a1913720a2b1bb4645bfa412a564e1599ae5bd1baa8a674af75d72" Jan 30 09:00:03 crc kubenswrapper[4758]: I0130 09:00:03.766515 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx" Jan 30 09:00:04 crc kubenswrapper[4758]: I0130 09:00:04.042567 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-lqbmm"] Jan 30 09:00:04 crc kubenswrapper[4758]: I0130 09:00:04.051505 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-lqbmm"] Jan 30 09:00:05 crc kubenswrapper[4758]: I0130 09:00:05.779139 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11f7c236-867a-465b-9514-de6a765b312b" path="/var/lib/kubelet/pods/11f7c236-867a-465b-9514-de6a765b312b/volumes" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.608458 4758 scope.go:117] "RemoveContainer" containerID="a887d904bf88f7531d24fd0632b3980599f13f39af4a57d591d1cab59676a5bb" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.646547 4758 scope.go:117] "RemoveContainer" containerID="de4f25edc76809fd476b4ddf2f6fed1b04afc6e58967e48dd869ed6a37fbb265" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.694467 4758 scope.go:117] "RemoveContainer" containerID="23862bcdc0458af24e0606a4eedaf48106778403061437cff21803ebeee27a94" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.727470 4758 scope.go:117] "RemoveContainer" containerID="1513688fede19d670ec6826188636f5ccd6bae62325cf06859976dc36b250613" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.768867 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 09:00:08 crc kubenswrapper[4758]: E0130 09:00:08.769280 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" 
podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.784917 4758 scope.go:117] "RemoveContainer" containerID="b2ff6cf87c18064183ff2ca83818d1799a9ee6c49eb71a91d7e28d365a9e731e" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.814896 4758 scope.go:117] "RemoveContainer" containerID="2906f8c6c80f27b1b5d6a346bfa1c2bccd0d8111cf2311c1ea9793ee001e6172" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.873524 4758 scope.go:117] "RemoveContainer" containerID="d070fa2fc47120d90dd29347010dd2bd317f21d5673f4354507acac0096009be" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.893809 4758 scope.go:117] "RemoveContainer" containerID="edfcbb4ee1b893b2f19bb3b08ec37186fb229027e460be6e17c73cf2a49baec0" Jan 30 09:00:08 crc kubenswrapper[4758]: I0130 09:00:08.915241 4758 scope.go:117] "RemoveContainer" containerID="0ae912bbbb171da1791fe8eb0cee80e9a7d55f62417065d9e913877b45490451" Jan 30 09:00:12 crc kubenswrapper[4758]: I0130 09:00:12.033378 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-slg8b"] Jan 30 09:00:12 crc kubenswrapper[4758]: I0130 09:00:12.042944 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-slg8b"] Jan 30 09:00:13 crc kubenswrapper[4758]: I0130 09:00:13.780108 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="419e16c4-297d-490a-8fd3-6d365e20f5f2" path="/var/lib/kubelet/pods/419e16c4-297d-490a-8fd3-6d365e20f5f2/volumes" Jan 30 09:00:22 crc kubenswrapper[4758]: I0130 09:00:22.769012 4758 scope.go:117] "RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 09:00:23 crc kubenswrapper[4758]: I0130 09:00:23.954298 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"a4e6da0cc99379149fae03030aa7bdc7f1cee2816f3c69e88906bfdf6d6dfda0"} Jan 30 09:00:24 crc kubenswrapper[4758]: I0130 09:00:24.064337 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-c24lg"] Jan 30 09:00:24 crc kubenswrapper[4758]: I0130 09:00:24.076004 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-c24lg"] Jan 30 09:00:25 crc kubenswrapper[4758]: I0130 09:00:25.061127 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-x2v6d"] Jan 30 09:00:25 crc kubenswrapper[4758]: I0130 09:00:25.069530 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-x2v6d"] Jan 30 09:00:25 crc kubenswrapper[4758]: I0130 09:00:25.779624 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12ff21aa-edae-4f56-a2ea-be0deb2d84d7" path="/var/lib/kubelet/pods/12ff21aa-edae-4f56-a2ea-be0deb2d84d7/volumes" Jan 30 09:00:25 crc kubenswrapper[4758]: I0130 09:00:25.781484 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3" path="/var/lib/kubelet/pods/25bb36fc-cf52-48c0-8321-7dcfaf5f7cf3/volumes" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.158100 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496061-prvzt"] Jan 30 09:01:00 crc kubenswrapper[4758]: E0130 09:01:00.158971 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b22039-92b3-4488-a971-2913dedb64ed" containerName="collect-profiles" Jan 30 09:01:00 crc 
kubenswrapper[4758]: I0130 09:01:00.158984 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b22039-92b3-4488-a971-2913dedb64ed" containerName="collect-profiles" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.159242 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b22039-92b3-4488-a971-2913dedb64ed" containerName="collect-profiles" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.159832 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.215355 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496061-prvzt"] Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.317846 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-config-data\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.318194 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-combined-ca-bundle\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.318455 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8jwd\" (UniqueName: \"kubernetes.io/projected/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-kube-api-access-d8jwd\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.318675 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-fernet-keys\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.421114 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-config-data\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.421224 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-combined-ca-bundle\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.421267 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8jwd\" (UniqueName: \"kubernetes.io/projected/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-kube-api-access-d8jwd\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc 
kubenswrapper[4758]: I0130 09:01:00.421315 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-fernet-keys\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.427864 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-fernet-keys\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.431205 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-config-data\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.431984 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-combined-ca-bundle\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.442836 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8jwd\" (UniqueName: \"kubernetes.io/projected/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-kube-api-access-d8jwd\") pod \"keystone-cron-29496061-prvzt\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.482741 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:00 crc kubenswrapper[4758]: I0130 09:01:00.952845 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496061-prvzt"] Jan 30 09:01:01 crc kubenswrapper[4758]: I0130 09:01:01.720898 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496061-prvzt" event={"ID":"c5ff7966-87c8-4b9b-8520-a05c3b5d252d","Type":"ContainerStarted","Data":"31c39afda45cb5bab6823cada480f84df8f6a4c26df945f64901b847cbbf0edb"} Jan 30 09:01:01 crc kubenswrapper[4758]: I0130 09:01:01.721243 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496061-prvzt" event={"ID":"c5ff7966-87c8-4b9b-8520-a05c3b5d252d","Type":"ContainerStarted","Data":"54ee80781f361f9610a63542584fb45e2211622231ba8ebf996592a73513e867"} Jan 30 09:01:02 crc kubenswrapper[4758]: I0130 09:01:02.754497 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496061-prvzt" podStartSLOduration=2.754476606 podStartE2EDuration="2.754476606s" podCreationTimestamp="2026-01-30 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 09:01:02.748638094 +0000 UTC m=+1867.720949655" watchObservedRunningTime="2026-01-30 09:01:02.754476606 +0000 UTC m=+1867.726788157" Jan 30 09:01:06 crc kubenswrapper[4758]: I0130 09:01:06.764346 4758 generic.go:334] "Generic (PLEG): container finished" podID="c5ff7966-87c8-4b9b-8520-a05c3b5d252d" containerID="31c39afda45cb5bab6823cada480f84df8f6a4c26df945f64901b847cbbf0edb" exitCode=0 Jan 30 09:01:06 crc kubenswrapper[4758]: I0130 09:01:06.764503 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496061-prvzt" event={"ID":"c5ff7966-87c8-4b9b-8520-a05c3b5d252d","Type":"ContainerDied","Data":"31c39afda45cb5bab6823cada480f84df8f6a4c26df945f64901b847cbbf0edb"} Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.139537 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.171482 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-fernet-keys\") pod \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.171700 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8jwd\" (UniqueName: \"kubernetes.io/projected/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-kube-api-access-d8jwd\") pod \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.171771 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-combined-ca-bundle\") pod \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.171802 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-config-data\") pod \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\" (UID: \"c5ff7966-87c8-4b9b-8520-a05c3b5d252d\") " Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.189314 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-kube-api-access-d8jwd" (OuterVolumeSpecName: "kube-api-access-d8jwd") pod "c5ff7966-87c8-4b9b-8520-a05c3b5d252d" (UID: "c5ff7966-87c8-4b9b-8520-a05c3b5d252d"). InnerVolumeSpecName "kube-api-access-d8jwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.189321 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c5ff7966-87c8-4b9b-8520-a05c3b5d252d" (UID: "c5ff7966-87c8-4b9b-8520-a05c3b5d252d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.212695 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5ff7966-87c8-4b9b-8520-a05c3b5d252d" (UID: "c5ff7966-87c8-4b9b-8520-a05c3b5d252d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.252285 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-config-data" (OuterVolumeSpecName: "config-data") pod "c5ff7966-87c8-4b9b-8520-a05c3b5d252d" (UID: "c5ff7966-87c8-4b9b-8520-a05c3b5d252d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.284332 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.284370 4758 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.284386 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8jwd\" (UniqueName: \"kubernetes.io/projected/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-kube-api-access-d8jwd\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.284402 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ff7966-87c8-4b9b-8520-a05c3b5d252d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.783366 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496061-prvzt" event={"ID":"c5ff7966-87c8-4b9b-8520-a05c3b5d252d","Type":"ContainerDied","Data":"54ee80781f361f9610a63542584fb45e2211622231ba8ebf996592a73513e867"} Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.783410 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54ee80781f361f9610a63542584fb45e2211622231ba8ebf996592a73513e867" Jan 30 09:01:08 crc kubenswrapper[4758]: I0130 09:01:08.783416 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496061-prvzt" Jan 30 09:01:09 crc kubenswrapper[4758]: I0130 09:01:09.085665 4758 scope.go:117] "RemoveContainer" containerID="c783ba41ba97d546267a2a7d551d57bf2d634c9498d0ec5c9365e656091583d9" Jan 30 09:01:09 crc kubenswrapper[4758]: I0130 09:01:09.144811 4758 scope.go:117] "RemoveContainer" containerID="c95e7220fa0d1725e338904a7e29c2f7e1c50ca5a270bfbf0b0819abddbe5c04" Jan 30 09:01:09 crc kubenswrapper[4758]: I0130 09:01:09.226786 4758 scope.go:117] "RemoveContainer" containerID="ce042854a2f3fd058ffc8182aa4289802d19b49b10f8800c0daceab9556857f3" Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.113453 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-7afc-account-create-update-4gfp5"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.134221 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-649b-account-create-update-fjrbb"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.144895 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7kv6r"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.155215 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-psn47"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.164559 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-7afc-account-create-update-4gfp5"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.177917 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-8mvw7"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.188915 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-4e24-account-create-update-pkqqp"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.200497 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-649b-account-create-update-fjrbb"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.211810 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7kv6r"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.255338 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-4e24-account-create-update-pkqqp"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.262552 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-psn47"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.269964 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-8mvw7"] Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.783308 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d2dbd4c-2c5d-4865-8cf2-ce663e060369" path="/var/lib/kubelet/pods/1d2dbd4c-2c5d-4865-8cf2-ce663e060369/volumes" Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.784590 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="227bbca8-b963-4b39-af28-ac9dbf50bc73" path="/var/lib/kubelet/pods/227bbca8-b963-4b39-af28-ac9dbf50bc73/volumes" Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.786396 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a7e2817-2850-4d53-8ebf-4977eea68664" path="/var/lib/kubelet/pods/6a7e2817-2850-4d53-8ebf-4977eea68664/volumes" Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.787600 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad" path="/var/lib/kubelet/pods/a91cfa2d-f4f9-43ef-bf6a-3aab2d67c6ad/volumes" Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.789433 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be7372d3-e1b3-4621-a40b-9b09fd3d7a3b" path="/var/lib/kubelet/pods/be7372d3-e1b3-4621-a40b-9b09fd3d7a3b/volumes" Jan 30 09:01:21 crc kubenswrapper[4758]: I0130 09:01:21.790382 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8" path="/var/lib/kubelet/pods/fff40ca2-ce20-4c6a-82c7-aa2c5b744ac8/volumes" Jan 30 09:01:41 crc kubenswrapper[4758]: I0130 09:01:41.067827 4758 generic.go:334] "Generic (PLEG): container finished" podID="64e0d966-7ff9-4dd8-97c0-660cde10793b" containerID="b2b90750683b86930522f155931531da5a2fe961e31b6ee7ce56016bc1eeeff0" exitCode=0 Jan 30 09:01:41 crc kubenswrapper[4758]: I0130 09:01:41.067952 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" event={"ID":"64e0d966-7ff9-4dd8-97c0-660cde10793b","Type":"ContainerDied","Data":"b2b90750683b86930522f155931531da5a2fe961e31b6ee7ce56016bc1eeeff0"} Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.581446 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.743067 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-ssh-key-openstack-edpm-ipam\") pod \"64e0d966-7ff9-4dd8-97c0-660cde10793b\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.743793 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-inventory\") pod \"64e0d966-7ff9-4dd8-97c0-660cde10793b\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.743959 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-bootstrap-combined-ca-bundle\") pod \"64e0d966-7ff9-4dd8-97c0-660cde10793b\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.744214 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvktt\" (UniqueName: \"kubernetes.io/projected/64e0d966-7ff9-4dd8-97c0-660cde10793b-kube-api-access-vvktt\") pod \"64e0d966-7ff9-4dd8-97c0-660cde10793b\" (UID: \"64e0d966-7ff9-4dd8-97c0-660cde10793b\") " Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.750261 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64e0d966-7ff9-4dd8-97c0-660cde10793b-kube-api-access-vvktt" (OuterVolumeSpecName: "kube-api-access-vvktt") pod "64e0d966-7ff9-4dd8-97c0-660cde10793b" (UID: "64e0d966-7ff9-4dd8-97c0-660cde10793b"). InnerVolumeSpecName "kube-api-access-vvktt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.752261 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "64e0d966-7ff9-4dd8-97c0-660cde10793b" (UID: "64e0d966-7ff9-4dd8-97c0-660cde10793b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.775934 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "64e0d966-7ff9-4dd8-97c0-660cde10793b" (UID: "64e0d966-7ff9-4dd8-97c0-660cde10793b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.779686 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-inventory" (OuterVolumeSpecName: "inventory") pod "64e0d966-7ff9-4dd8-97c0-660cde10793b" (UID: "64e0d966-7ff9-4dd8-97c0-660cde10793b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.846397 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.846436 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.846449 4758 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64e0d966-7ff9-4dd8-97c0-660cde10793b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:42 crc kubenswrapper[4758]: I0130 09:01:42.846458 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvktt\" (UniqueName: \"kubernetes.io/projected/64e0d966-7ff9-4dd8-97c0-660cde10793b-kube-api-access-vvktt\") on node \"crc\" DevicePath \"\"" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.092247 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" event={"ID":"64e0d966-7ff9-4dd8-97c0-660cde10793b","Type":"ContainerDied","Data":"25a2f1bc96743755a476b5e1b766c3ea8d8ac8c33ead2cf689d320248e522ec6"} Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.092319 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25a2f1bc96743755a476b5e1b766c3ea8d8ac8c33ead2cf689d320248e522ec6" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.092332 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.203396 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w"] Jan 30 09:01:43 crc kubenswrapper[4758]: E0130 09:01:43.204182 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ff7966-87c8-4b9b-8520-a05c3b5d252d" containerName="keystone-cron" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.204206 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ff7966-87c8-4b9b-8520-a05c3b5d252d" containerName="keystone-cron" Jan 30 09:01:43 crc kubenswrapper[4758]: E0130 09:01:43.204247 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e0d966-7ff9-4dd8-97c0-660cde10793b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.204259 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e0d966-7ff9-4dd8-97c0-660cde10793b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.204635 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5ff7966-87c8-4b9b-8520-a05c3b5d252d" containerName="keystone-cron" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.204664 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="64e0d966-7ff9-4dd8-97c0-660cde10793b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.205491 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.211232 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.211500 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.211756 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.212536 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.224875 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w"] Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.357995 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v79ns\" (UniqueName: \"kubernetes.io/projected/103d82ed-724d-4545-9b1c-04633d68c1ef-kube-api-access-v79ns\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.358080 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.358134 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.465508 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v79ns\" (UniqueName: \"kubernetes.io/projected/103d82ed-724d-4545-9b1c-04633d68c1ef-kube-api-access-v79ns\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.465648 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.465798 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.471789 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.479548 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.488572 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v79ns\" (UniqueName: \"kubernetes.io/projected/103d82ed-724d-4545-9b1c-04633d68c1ef-kube-api-access-v79ns\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:43 crc kubenswrapper[4758]: I0130 09:01:43.525487 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:01:44 crc kubenswrapper[4758]: I0130 09:01:44.029703 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w"] Jan 30 09:01:44 crc kubenswrapper[4758]: I0130 09:01:44.104195 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" event={"ID":"103d82ed-724d-4545-9b1c-04633d68c1ef","Type":"ContainerStarted","Data":"dc8c698a7e3cc8b7b9e80c909e94c993ae488290acead9874c7b8920f841aaf0"} Jan 30 09:01:45 crc kubenswrapper[4758]: I0130 09:01:45.114355 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" event={"ID":"103d82ed-724d-4545-9b1c-04633d68c1ef","Type":"ContainerStarted","Data":"84be8ae0cf87735785390fb892a6fee69f14471016eb2f92e7cf490645bad8b1"} Jan 30 09:01:45 crc kubenswrapper[4758]: I0130 09:01:45.140718 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" podStartSLOduration=1.6518071060000001 podStartE2EDuration="2.1406631s" podCreationTimestamp="2026-01-30 09:01:43 +0000 UTC" firstStartedPulling="2026-01-30 09:01:44.037974403 +0000 UTC m=+1909.010285954" lastFinishedPulling="2026-01-30 09:01:44.526830397 +0000 UTC m=+1909.499141948" observedRunningTime="2026-01-30 09:01:45.129063263 +0000 UTC m=+1910.101374824" watchObservedRunningTime="2026-01-30 09:01:45.1406631 +0000 UTC m=+1910.112974661" Jan 30 09:02:05 crc kubenswrapper[4758]: I0130 09:02:05.042166 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-chr6l"] Jan 30 09:02:05 crc kubenswrapper[4758]: 
I0130 09:02:05.054121 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-chr6l"] Jan 30 09:02:05 crc kubenswrapper[4758]: I0130 09:02:05.780552 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06a33948-1e21-49dd-9f48-b4c188ae6e9d" path="/var/lib/kubelet/pods/06a33948-1e21-49dd-9f48-b4c188ae6e9d/volumes" Jan 30 09:02:09 crc kubenswrapper[4758]: I0130 09:02:09.341098 4758 scope.go:117] "RemoveContainer" containerID="638aaa1aba025b2a9d201ffacd37903c3bef07833c13c4c8b9d80743c4260d8f" Jan 30 09:02:09 crc kubenswrapper[4758]: I0130 09:02:09.398634 4758 scope.go:117] "RemoveContainer" containerID="6b5f2a511f68dead3ed1b1b92615f5c6856551f3b3f434c8b9484d4f99758a12" Jan 30 09:02:09 crc kubenswrapper[4758]: I0130 09:02:09.434335 4758 scope.go:117] "RemoveContainer" containerID="67a4a1af734c4adf4de193490eeca87888430ce68cfa2191062d7093ddc838ba" Jan 30 09:02:09 crc kubenswrapper[4758]: I0130 09:02:09.475723 4758 scope.go:117] "RemoveContainer" containerID="4f8803add769de9f15a23dbfbf03688310d1c6b3d5c93d79adba46cb16dec5ca" Jan 30 09:02:09 crc kubenswrapper[4758]: I0130 09:02:09.513048 4758 scope.go:117] "RemoveContainer" containerID="2a553d18e8c8709e9420435d0699cca30262f29d812796ba71700ab0845f9d1c" Jan 30 09:02:09 crc kubenswrapper[4758]: I0130 09:02:09.557413 4758 scope.go:117] "RemoveContainer" containerID="54d62ade615f4d66aac7ea9231240e05dbce2b51363637c91db0cd5147899e3f" Jan 30 09:02:09 crc kubenswrapper[4758]: I0130 09:02:09.605307 4758 scope.go:117] "RemoveContainer" containerID="e022a3d99ce24d9dd10ef619189c15979640758b052ecd93de5065aa03c1b11a" Jan 30 09:02:22 crc kubenswrapper[4758]: I0130 09:02:22.388001 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:02:22 crc kubenswrapper[4758]: I0130 09:02:22.388557 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:02:52 crc kubenswrapper[4758]: I0130 09:02:52.387732 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:02:52 crc kubenswrapper[4758]: I0130 09:02:52.388468 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:03:13 crc kubenswrapper[4758]: I0130 09:03:13.039891 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-xp8d6"] Jan 30 09:03:13 crc kubenswrapper[4758]: I0130 09:03:13.050426 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-xp8d6"] Jan 30 09:03:13 crc kubenswrapper[4758]: I0130 09:03:13.778591 4758 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="9c7907da-98f4-46c2-9089-7227516cf739" path="/var/lib/kubelet/pods/9c7907da-98f4-46c2-9089-7227516cf739/volumes" Jan 30 09:03:20 crc kubenswrapper[4758]: I0130 09:03:20.048680 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-95vnq"] Jan 30 09:03:20 crc kubenswrapper[4758]: I0130 09:03:20.058224 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-95vnq"] Jan 30 09:03:21 crc kubenswrapper[4758]: I0130 09:03:21.780180 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42581858-ead1-4898-9fad-72411bf3c6a4" path="/var/lib/kubelet/pods/42581858-ead1-4898-9fad-72411bf3c6a4/volumes" Jan 30 09:03:22 crc kubenswrapper[4758]: I0130 09:03:22.387027 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:03:22 crc kubenswrapper[4758]: I0130 09:03:22.387368 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:03:22 crc kubenswrapper[4758]: I0130 09:03:22.387413 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:03:22 crc kubenswrapper[4758]: I0130 09:03:22.388085 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a4e6da0cc99379149fae03030aa7bdc7f1cee2816f3c69e88906bfdf6d6dfda0"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:03:22 crc kubenswrapper[4758]: I0130 09:03:22.388146 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://a4e6da0cc99379149fae03030aa7bdc7f1cee2816f3c69e88906bfdf6d6dfda0" gracePeriod=600 Jan 30 09:03:22 crc kubenswrapper[4758]: I0130 09:03:22.961842 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="a4e6da0cc99379149fae03030aa7bdc7f1cee2816f3c69e88906bfdf6d6dfda0" exitCode=0 Jan 30 09:03:22 crc kubenswrapper[4758]: I0130 09:03:22.961899 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"a4e6da0cc99379149fae03030aa7bdc7f1cee2816f3c69e88906bfdf6d6dfda0"} Jan 30 09:03:22 crc kubenswrapper[4758]: I0130 09:03:22.962370 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41"} Jan 30 09:03:22 crc kubenswrapper[4758]: I0130 09:03:22.962404 4758 scope.go:117] 
"RemoveContainer" containerID="267bbfd6dcd2c65ee4fcc5dc9ed51959e853453fdf3e0f41cbfb5801ae6ee275" Jan 30 09:03:24 crc kubenswrapper[4758]: I0130 09:03:24.980938 4758 generic.go:334] "Generic (PLEG): container finished" podID="103d82ed-724d-4545-9b1c-04633d68c1ef" containerID="84be8ae0cf87735785390fb892a6fee69f14471016eb2f92e7cf490645bad8b1" exitCode=0 Jan 30 09:03:24 crc kubenswrapper[4758]: I0130 09:03:24.981063 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" event={"ID":"103d82ed-724d-4545-9b1c-04633d68c1ef","Type":"ContainerDied","Data":"84be8ae0cf87735785390fb892a6fee69f14471016eb2f92e7cf490645bad8b1"} Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.449228 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.587000 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-inventory\") pod \"103d82ed-724d-4545-9b1c-04633d68c1ef\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.587240 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-ssh-key-openstack-edpm-ipam\") pod \"103d82ed-724d-4545-9b1c-04633d68c1ef\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.587354 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v79ns\" (UniqueName: \"kubernetes.io/projected/103d82ed-724d-4545-9b1c-04633d68c1ef-kube-api-access-v79ns\") pod \"103d82ed-724d-4545-9b1c-04633d68c1ef\" (UID: \"103d82ed-724d-4545-9b1c-04633d68c1ef\") " Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.594224 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/103d82ed-724d-4545-9b1c-04633d68c1ef-kube-api-access-v79ns" (OuterVolumeSpecName: "kube-api-access-v79ns") pod "103d82ed-724d-4545-9b1c-04633d68c1ef" (UID: "103d82ed-724d-4545-9b1c-04633d68c1ef"). InnerVolumeSpecName "kube-api-access-v79ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.621458 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-inventory" (OuterVolumeSpecName: "inventory") pod "103d82ed-724d-4545-9b1c-04633d68c1ef" (UID: "103d82ed-724d-4545-9b1c-04633d68c1ef"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.622292 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "103d82ed-724d-4545-9b1c-04633d68c1ef" (UID: "103d82ed-724d-4545-9b1c-04633d68c1ef"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.689994 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.690092 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v79ns\" (UniqueName: \"kubernetes.io/projected/103d82ed-724d-4545-9b1c-04633d68c1ef-kube-api-access-v79ns\") on node \"crc\" DevicePath \"\"" Jan 30 09:03:26 crc kubenswrapper[4758]: I0130 09:03:26.690108 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/103d82ed-724d-4545-9b1c-04633d68c1ef-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.001764 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" event={"ID":"103d82ed-724d-4545-9b1c-04633d68c1ef","Type":"ContainerDied","Data":"dc8c698a7e3cc8b7b9e80c909e94c993ae488290acead9874c7b8920f841aaf0"} Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.002141 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc8c698a7e3cc8b7b9e80c909e94c993ae488290acead9874c7b8920f841aaf0" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.001808 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.150843 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn"] Jan 30 09:03:27 crc kubenswrapper[4758]: E0130 09:03:27.151280 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103d82ed-724d-4545-9b1c-04633d68c1ef" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.151304 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="103d82ed-724d-4545-9b1c-04633d68c1ef" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.151561 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="103d82ed-724d-4545-9b1c-04633d68c1ef" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.152361 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.154740 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.155176 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.155376 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.155488 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.174648 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn"] Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.302343 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4dm2\" (UniqueName: \"kubernetes.io/projected/a282c0aa-8c3d-4a78-9fd6-1971701a1158-kube-api-access-c4dm2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.302690 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.302762 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.404058 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.404637 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.404920 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4dm2\" (UniqueName: 
\"kubernetes.io/projected/a282c0aa-8c3d-4a78-9fd6-1971701a1158-kube-api-access-c4dm2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.416199 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.417486 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.424701 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4dm2\" (UniqueName: \"kubernetes.io/projected/a282c0aa-8c3d-4a78-9fd6-1971701a1158-kube-api-access-c4dm2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.472976 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.975208 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn"] Jan 30 09:03:27 crc kubenswrapper[4758]: I0130 09:03:27.980680 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:03:28 crc kubenswrapper[4758]: I0130 09:03:28.011419 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" event={"ID":"a282c0aa-8c3d-4a78-9fd6-1971701a1158","Type":"ContainerStarted","Data":"22d6485e6d1373e9c1fd9189afdc079b81c1850b7190e78e117073ff593293a2"} Jan 30 09:03:29 crc kubenswrapper[4758]: I0130 09:03:29.026821 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" event={"ID":"a282c0aa-8c3d-4a78-9fd6-1971701a1158","Type":"ContainerStarted","Data":"2c32173feef6cb9a2c38285cc27663ba19215a548a7eac139d2cd7b311a40fd7"} Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.045014 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" podStartSLOduration=31.417722064 podStartE2EDuration="32.044986043s" podCreationTimestamp="2026-01-30 09:03:27 +0000 UTC" firstStartedPulling="2026-01-30 09:03:27.980405998 +0000 UTC m=+2012.952717549" lastFinishedPulling="2026-01-30 09:03:28.607669977 +0000 UTC m=+2013.579981528" observedRunningTime="2026-01-30 09:03:29.05353199 +0000 UTC m=+2014.025843611" watchObservedRunningTime="2026-01-30 09:03:59.044986043 +0000 UTC 
m=+2044.017297604" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.049344 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-pmsnh"] Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.057926 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-pmsnh"] Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.632110 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cjxmq"] Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.634000 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.649098 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cjxmq"] Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.763108 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-catalog-content\") pod \"redhat-operators-cjxmq\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.763253 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc5gd\" (UniqueName: \"kubernetes.io/projected/0cd223a4-37c2-4e55-9516-bc48c435b50c-kube-api-access-nc5gd\") pod \"redhat-operators-cjxmq\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.763298 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-utilities\") pod \"redhat-operators-cjxmq\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.797836 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaef08b6-5771-4e63-90e2-f3eb803993ad" path="/var/lib/kubelet/pods/aaef08b6-5771-4e63-90e2-f3eb803993ad/volumes" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.865087 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-catalog-content\") pod \"redhat-operators-cjxmq\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.866367 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc5gd\" (UniqueName: \"kubernetes.io/projected/0cd223a4-37c2-4e55-9516-bc48c435b50c-kube-api-access-nc5gd\") pod \"redhat-operators-cjxmq\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.866707 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-utilities\") pod \"redhat-operators-cjxmq\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc 
kubenswrapper[4758]: I0130 09:03:59.865929 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-catalog-content\") pod \"redhat-operators-cjxmq\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.867017 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-utilities\") pod \"redhat-operators-cjxmq\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.891320 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc5gd\" (UniqueName: \"kubernetes.io/projected/0cd223a4-37c2-4e55-9516-bc48c435b50c-kube-api-access-nc5gd\") pod \"redhat-operators-cjxmq\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:03:59 crc kubenswrapper[4758]: I0130 09:03:59.955485 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:04:00 crc kubenswrapper[4758]: I0130 09:04:00.444803 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cjxmq"] Jan 30 09:04:01 crc kubenswrapper[4758]: I0130 09:04:01.313295 4758 generic.go:334] "Generic (PLEG): container finished" podID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerID="e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598" exitCode=0 Jan 30 09:04:01 crc kubenswrapper[4758]: I0130 09:04:01.313318 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxmq" event={"ID":"0cd223a4-37c2-4e55-9516-bc48c435b50c","Type":"ContainerDied","Data":"e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598"} Jan 30 09:04:01 crc kubenswrapper[4758]: I0130 09:04:01.313774 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxmq" event={"ID":"0cd223a4-37c2-4e55-9516-bc48c435b50c","Type":"ContainerStarted","Data":"604f2f2ff8b464eaacc934db9131008fb807f8f7723bad18c38405e8b17d4ba9"} Jan 30 09:04:03 crc kubenswrapper[4758]: I0130 09:04:03.331213 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxmq" event={"ID":"0cd223a4-37c2-4e55-9516-bc48c435b50c","Type":"ContainerStarted","Data":"ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0"} Jan 30 09:04:08 crc kubenswrapper[4758]: I0130 09:04:08.380891 4758 generic.go:334] "Generic (PLEG): container finished" podID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerID="ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0" exitCode=0 Jan 30 09:04:08 crc kubenswrapper[4758]: I0130 09:04:08.380973 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxmq" event={"ID":"0cd223a4-37c2-4e55-9516-bc48c435b50c","Type":"ContainerDied","Data":"ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0"} Jan 30 09:04:09 crc kubenswrapper[4758]: I0130 09:04:09.393999 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxmq" 
event={"ID":"0cd223a4-37c2-4e55-9516-bc48c435b50c","Type":"ContainerStarted","Data":"ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6"} Jan 30 09:04:09 crc kubenswrapper[4758]: I0130 09:04:09.792953 4758 scope.go:117] "RemoveContainer" containerID="83928b37f6975882ba286ca1fad93e1bc51d5667bdaabba8b964bdb9f2dfdcd4" Jan 30 09:04:09 crc kubenswrapper[4758]: I0130 09:04:09.834903 4758 scope.go:117] "RemoveContainer" containerID="fe37a0352772fb36e158de913042f5d07970aede7274a69f794e11be3129cf1b" Jan 30 09:04:09 crc kubenswrapper[4758]: I0130 09:04:09.898220 4758 scope.go:117] "RemoveContainer" containerID="87c0147985c0330380f0c62d6ffa802a17a1d7f52af945bd4b80ccd53693d21e" Jan 30 09:04:09 crc kubenswrapper[4758]: I0130 09:04:09.956931 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:04:09 crc kubenswrapper[4758]: I0130 09:04:09.956985 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:04:11 crc kubenswrapper[4758]: I0130 09:04:11.013575 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cjxmq" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerName="registry-server" probeResult="failure" output=< Jan 30 09:04:11 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:04:11 crc kubenswrapper[4758]: > Jan 30 09:04:20 crc kubenswrapper[4758]: I0130 09:04:20.004714 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:04:20 crc kubenswrapper[4758]: I0130 09:04:20.028387 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cjxmq" podStartSLOduration=13.522734383 podStartE2EDuration="21.028361516s" podCreationTimestamp="2026-01-30 09:03:59 +0000 UTC" firstStartedPulling="2026-01-30 09:04:01.31580972 +0000 UTC m=+2046.288121271" lastFinishedPulling="2026-01-30 09:04:08.821436853 +0000 UTC m=+2053.793748404" observedRunningTime="2026-01-30 09:04:09.419782586 +0000 UTC m=+2054.392094147" watchObservedRunningTime="2026-01-30 09:04:20.028361516 +0000 UTC m=+2065.000673067" Jan 30 09:04:20 crc kubenswrapper[4758]: I0130 09:04:20.057549 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:04:20 crc kubenswrapper[4758]: I0130 09:04:20.247186 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cjxmq"] Jan 30 09:04:21 crc kubenswrapper[4758]: I0130 09:04:21.491993 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cjxmq" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerName="registry-server" containerID="cri-o://ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6" gracePeriod=2 Jan 30 09:04:21 crc kubenswrapper[4758]: I0130 09:04:21.991022 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.013613 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-catalog-content\") pod \"0cd223a4-37c2-4e55-9516-bc48c435b50c\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.013709 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-utilities\") pod \"0cd223a4-37c2-4e55-9516-bc48c435b50c\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.013873 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc5gd\" (UniqueName: \"kubernetes.io/projected/0cd223a4-37c2-4e55-9516-bc48c435b50c-kube-api-access-nc5gd\") pod \"0cd223a4-37c2-4e55-9516-bc48c435b50c\" (UID: \"0cd223a4-37c2-4e55-9516-bc48c435b50c\") " Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.016176 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-utilities" (OuterVolumeSpecName: "utilities") pod "0cd223a4-37c2-4e55-9516-bc48c435b50c" (UID: "0cd223a4-37c2-4e55-9516-bc48c435b50c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.035320 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cd223a4-37c2-4e55-9516-bc48c435b50c-kube-api-access-nc5gd" (OuterVolumeSpecName: "kube-api-access-nc5gd") pod "0cd223a4-37c2-4e55-9516-bc48c435b50c" (UID: "0cd223a4-37c2-4e55-9516-bc48c435b50c"). InnerVolumeSpecName "kube-api-access-nc5gd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.117241 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc5gd\" (UniqueName: \"kubernetes.io/projected/0cd223a4-37c2-4e55-9516-bc48c435b50c-kube-api-access-nc5gd\") on node \"crc\" DevicePath \"\"" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.117288 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.183293 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cd223a4-37c2-4e55-9516-bc48c435b50c" (UID: "0cd223a4-37c2-4e55-9516-bc48c435b50c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.218711 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd223a4-37c2-4e55-9516-bc48c435b50c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.501855 4758 generic.go:334] "Generic (PLEG): container finished" podID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerID="ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6" exitCode=0 Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.502114 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxmq" event={"ID":"0cd223a4-37c2-4e55-9516-bc48c435b50c","Type":"ContainerDied","Data":"ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6"} Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.502155 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxmq" event={"ID":"0cd223a4-37c2-4e55-9516-bc48c435b50c","Type":"ContainerDied","Data":"604f2f2ff8b464eaacc934db9131008fb807f8f7723bad18c38405e8b17d4ba9"} Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.502170 4758 scope.go:117] "RemoveContainer" containerID="ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.502292 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjxmq" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.525708 4758 scope.go:117] "RemoveContainer" containerID="ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.547050 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cjxmq"] Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.564734 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cjxmq"] Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.567543 4758 scope.go:117] "RemoveContainer" containerID="e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.603160 4758 scope.go:117] "RemoveContainer" containerID="ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6" Jan 30 09:04:22 crc kubenswrapper[4758]: E0130 09:04:22.604883 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6\": container with ID starting with ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6 not found: ID does not exist" containerID="ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.604924 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6"} err="failed to get container status \"ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6\": rpc error: code = NotFound desc = could not find container \"ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6\": container with ID starting with ec432275e1fa95d3496ca790f2781256a5023fa44b7195a978507263e74214b6 not found: ID does not exist" Jan 30 09:04:22 crc 
kubenswrapper[4758]: I0130 09:04:22.604961 4758 scope.go:117] "RemoveContainer" containerID="ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0" Jan 30 09:04:22 crc kubenswrapper[4758]: E0130 09:04:22.605308 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0\": container with ID starting with ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0 not found: ID does not exist" containerID="ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.605364 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0"} err="failed to get container status \"ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0\": rpc error: code = NotFound desc = could not find container \"ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0\": container with ID starting with ab0c5f4f8bf52f8d8f2f44b5fb6c3972af0477eb5f7a23b6e4a9c7c3723b22b0 not found: ID does not exist" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.605388 4758 scope.go:117] "RemoveContainer" containerID="e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598" Jan 30 09:04:22 crc kubenswrapper[4758]: E0130 09:04:22.605757 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598\": container with ID starting with e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598 not found: ID does not exist" containerID="e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598" Jan 30 09:04:22 crc kubenswrapper[4758]: I0130 09:04:22.605788 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598"} err="failed to get container status \"e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598\": rpc error: code = NotFound desc = could not find container \"e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598\": container with ID starting with e532ab87d28a2eddba80c7fd9035010b0293d4d57e947d603a867b7ccf6d3598 not found: ID does not exist" Jan 30 09:04:23 crc kubenswrapper[4758]: I0130 09:04:23.778289 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" path="/var/lib/kubelet/pods/0cd223a4-37c2-4e55-9516-bc48c435b50c/volumes" Jan 30 09:04:43 crc kubenswrapper[4758]: I0130 09:04:43.659885 4758 generic.go:334] "Generic (PLEG): container finished" podID="a282c0aa-8c3d-4a78-9fd6-1971701a1158" containerID="2c32173feef6cb9a2c38285cc27663ba19215a548a7eac139d2cd7b311a40fd7" exitCode=0 Jan 30 09:04:43 crc kubenswrapper[4758]: I0130 09:04:43.660026 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" event={"ID":"a282c0aa-8c3d-4a78-9fd6-1971701a1158","Type":"ContainerDied","Data":"2c32173feef6cb9a2c38285cc27663ba19215a548a7eac139d2cd7b311a40fd7"} Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.084953 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.280964 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4dm2\" (UniqueName: \"kubernetes.io/projected/a282c0aa-8c3d-4a78-9fd6-1971701a1158-kube-api-access-c4dm2\") pod \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.281055 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-ssh-key-openstack-edpm-ipam\") pod \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.281234 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-inventory\") pod \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\" (UID: \"a282c0aa-8c3d-4a78-9fd6-1971701a1158\") " Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.288017 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a282c0aa-8c3d-4a78-9fd6-1971701a1158-kube-api-access-c4dm2" (OuterVolumeSpecName: "kube-api-access-c4dm2") pod "a282c0aa-8c3d-4a78-9fd6-1971701a1158" (UID: "a282c0aa-8c3d-4a78-9fd6-1971701a1158"). InnerVolumeSpecName "kube-api-access-c4dm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.312596 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a282c0aa-8c3d-4a78-9fd6-1971701a1158" (UID: "a282c0aa-8c3d-4a78-9fd6-1971701a1158"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.317878 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-inventory" (OuterVolumeSpecName: "inventory") pod "a282c0aa-8c3d-4a78-9fd6-1971701a1158" (UID: "a282c0aa-8c3d-4a78-9fd6-1971701a1158"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.383032 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.383077 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4dm2\" (UniqueName: \"kubernetes.io/projected/a282c0aa-8c3d-4a78-9fd6-1971701a1158-kube-api-access-c4dm2\") on node \"crc\" DevicePath \"\"" Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.383088 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a282c0aa-8c3d-4a78-9fd6-1971701a1158-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.679599 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" event={"ID":"a282c0aa-8c3d-4a78-9fd6-1971701a1158","Type":"ContainerDied","Data":"22d6485e6d1373e9c1fd9189afdc079b81c1850b7190e78e117073ff593293a2"} Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.679646 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22d6485e6d1373e9c1fd9189afdc079b81c1850b7190e78e117073ff593293a2" Jan 30 09:04:45 crc kubenswrapper[4758]: I0130 09:04:45.679678 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.197265 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r"] Jan 30 09:04:46 crc kubenswrapper[4758]: E0130 09:04:46.197877 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerName="registry-server" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.197889 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerName="registry-server" Jan 30 09:04:46 crc kubenswrapper[4758]: E0130 09:04:46.197923 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerName="extract-content" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.197929 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerName="extract-content" Jan 30 09:04:46 crc kubenswrapper[4758]: E0130 09:04:46.197945 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerName="extract-utilities" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.197951 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerName="extract-utilities" Jan 30 09:04:46 crc kubenswrapper[4758]: E0130 09:04:46.197962 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a282c0aa-8c3d-4a78-9fd6-1971701a1158" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.197969 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a282c0aa-8c3d-4a78-9fd6-1971701a1158" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 09:04:46 crc 
kubenswrapper[4758]: I0130 09:04:46.198149 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cd223a4-37c2-4e55-9516-bc48c435b50c" containerName="registry-server" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.198189 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a282c0aa-8c3d-4a78-9fd6-1971701a1158" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.198752 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.201599 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.201636 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.201840 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.201872 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.227876 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r"] Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.402317 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55r6r\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.402690 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndddg\" (UniqueName: \"kubernetes.io/projected/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-kube-api-access-ndddg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55r6r\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.402729 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55r6r\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.504790 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndddg\" (UniqueName: \"kubernetes.io/projected/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-kube-api-access-ndddg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55r6r\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.504852 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55r6r\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.504932 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55r6r\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.515598 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55r6r\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.515823 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55r6r\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.566842 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndddg\" (UniqueName: \"kubernetes.io/projected/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-kube-api-access-ndddg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-55r6r\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:46 crc kubenswrapper[4758]: I0130 09:04:46.821303 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:47 crc kubenswrapper[4758]: I0130 09:04:47.457488 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r"] Jan 30 09:04:47 crc kubenswrapper[4758]: I0130 09:04:47.699133 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" event={"ID":"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83","Type":"ContainerStarted","Data":"d352897def0b617952e7c67684ba624775564f16c5a6496bda83f806b89314ef"} Jan 30 09:04:49 crc kubenswrapper[4758]: I0130 09:04:49.726289 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" event={"ID":"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83","Type":"ContainerStarted","Data":"18998ab7144638e9b2dfc92ec7bcdccca85b06ffdbc64b6bcf7bb049fb552fc0"} Jan 30 09:04:49 crc kubenswrapper[4758]: I0130 09:04:49.746295 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" podStartSLOduration=1.9269244890000001 podStartE2EDuration="3.746275918s" podCreationTimestamp="2026-01-30 09:04:46 +0000 UTC" firstStartedPulling="2026-01-30 09:04:47.46866209 +0000 UTC m=+2092.440973641" lastFinishedPulling="2026-01-30 09:04:49.288013519 +0000 UTC m=+2094.260325070" observedRunningTime="2026-01-30 09:04:49.739220229 +0000 UTC m=+2094.711531780" watchObservedRunningTime="2026-01-30 09:04:49.746275918 +0000 UTC m=+2094.718587469" Jan 30 09:04:55 crc kubenswrapper[4758]: I0130 09:04:55.812065 4758 generic.go:334] "Generic (PLEG): container finished" podID="21db3b55-b11b-4ca5-a2d0-676ec4e6fb83" containerID="18998ab7144638e9b2dfc92ec7bcdccca85b06ffdbc64b6bcf7bb049fb552fc0" exitCode=0 Jan 30 09:04:55 crc kubenswrapper[4758]: I0130 09:04:55.812690 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" event={"ID":"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83","Type":"ContainerDied","Data":"18998ab7144638e9b2dfc92ec7bcdccca85b06ffdbc64b6bcf7bb049fb552fc0"} Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.323759 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.495735 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-inventory\") pod \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.496188 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndddg\" (UniqueName: \"kubernetes.io/projected/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-kube-api-access-ndddg\") pod \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.496404 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-ssh-key-openstack-edpm-ipam\") pod \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\" (UID: \"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83\") " Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.522180 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-kube-api-access-ndddg" (OuterVolumeSpecName: "kube-api-access-ndddg") pod "21db3b55-b11b-4ca5-a2d0-676ec4e6fb83" (UID: "21db3b55-b11b-4ca5-a2d0-676ec4e6fb83"). InnerVolumeSpecName "kube-api-access-ndddg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.527618 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "21db3b55-b11b-4ca5-a2d0-676ec4e6fb83" (UID: "21db3b55-b11b-4ca5-a2d0-676ec4e6fb83"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.529604 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-inventory" (OuterVolumeSpecName: "inventory") pod "21db3b55-b11b-4ca5-a2d0-676ec4e6fb83" (UID: "21db3b55-b11b-4ca5-a2d0-676ec4e6fb83"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.598746 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndddg\" (UniqueName: \"kubernetes.io/projected/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-kube-api-access-ndddg\") on node \"crc\" DevicePath \"\"" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.599015 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.599160 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21db3b55-b11b-4ca5-a2d0-676ec4e6fb83-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.828557 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" event={"ID":"21db3b55-b11b-4ca5-a2d0-676ec4e6fb83","Type":"ContainerDied","Data":"d352897def0b617952e7c67684ba624775564f16c5a6496bda83f806b89314ef"} Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.828780 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d352897def0b617952e7c67684ba624775564f16c5a6496bda83f806b89314ef" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.828622 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-55r6r" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.932245 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8"] Jan 30 09:04:57 crc kubenswrapper[4758]: E0130 09:04:57.932628 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21db3b55-b11b-4ca5-a2d0-676ec4e6fb83" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.932646 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="21db3b55-b11b-4ca5-a2d0-676ec4e6fb83" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.932843 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="21db3b55-b11b-4ca5-a2d0-676ec4e6fb83" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.933565 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.937767 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.938186 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.938350 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.938514 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:04:57 crc kubenswrapper[4758]: I0130 09:04:57.965953 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8"] Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.005682 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtws8\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.005838 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtws8\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.005883 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rgvb\" (UniqueName: \"kubernetes.io/projected/39adf6a6-10cd-412d-aca4-3c68ddcf8887-kube-api-access-9rgvb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtws8\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.107532 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtws8\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.107601 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rgvb\" (UniqueName: \"kubernetes.io/projected/39adf6a6-10cd-412d-aca4-3c68ddcf8887-kube-api-access-9rgvb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtws8\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.107681 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-ssh-key-openstack-edpm-ipam\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-rtws8\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.112906 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtws8\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.113706 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtws8\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.128681 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rgvb\" (UniqueName: \"kubernetes.io/projected/39adf6a6-10cd-412d-aca4-3c68ddcf8887-kube-api-access-9rgvb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rtws8\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.253711 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:04:58 crc kubenswrapper[4758]: I0130 09:04:58.886805 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8"] Jan 30 09:04:59 crc kubenswrapper[4758]: I0130 09:04:59.843807 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" event={"ID":"39adf6a6-10cd-412d-aca4-3c68ddcf8887","Type":"ContainerStarted","Data":"b23bf407896421a035b030f60a01b93a3691c0ab6b6ddf66ca1f15e3a0584fb0"} Jan 30 09:04:59 crc kubenswrapper[4758]: I0130 09:04:59.844167 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" event={"ID":"39adf6a6-10cd-412d-aca4-3c68ddcf8887","Type":"ContainerStarted","Data":"ef6efd355e0b4c3febb5707c62cbcfd772fd1e6a6f9bafd7965c6c7bab5ba343"} Jan 30 09:04:59 crc kubenswrapper[4758]: I0130 09:04:59.872736 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" podStartSLOduration=2.455770913 podStartE2EDuration="2.87271257s" podCreationTimestamp="2026-01-30 09:04:57 +0000 UTC" firstStartedPulling="2026-01-30 09:04:58.894709327 +0000 UTC m=+2103.867020878" lastFinishedPulling="2026-01-30 09:04:59.311650984 +0000 UTC m=+2104.283962535" observedRunningTime="2026-01-30 09:04:59.86077432 +0000 UTC m=+2104.833085871" watchObservedRunningTime="2026-01-30 09:04:59.87271257 +0000 UTC m=+2104.845024121" Jan 30 09:05:22 crc kubenswrapper[4758]: I0130 09:05:22.387908 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 30 09:05:22 crc kubenswrapper[4758]: I0130 09:05:22.389054 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:05:46 crc kubenswrapper[4758]: I0130 09:05:46.554978 4758 generic.go:334] "Generic (PLEG): container finished" podID="39adf6a6-10cd-412d-aca4-3c68ddcf8887" containerID="b23bf407896421a035b030f60a01b93a3691c0ab6b6ddf66ca1f15e3a0584fb0" exitCode=0 Jan 30 09:05:46 crc kubenswrapper[4758]: I0130 09:05:46.555521 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" event={"ID":"39adf6a6-10cd-412d-aca4-3c68ddcf8887","Type":"ContainerDied","Data":"b23bf407896421a035b030f60a01b93a3691c0ab6b6ddf66ca1f15e3a0584fb0"} Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.002598 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.130289 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-ssh-key-openstack-edpm-ipam\") pod \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.130363 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rgvb\" (UniqueName: \"kubernetes.io/projected/39adf6a6-10cd-412d-aca4-3c68ddcf8887-kube-api-access-9rgvb\") pod \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.130555 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-inventory\") pod \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\" (UID: \"39adf6a6-10cd-412d-aca4-3c68ddcf8887\") " Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.135951 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39adf6a6-10cd-412d-aca4-3c68ddcf8887-kube-api-access-9rgvb" (OuterVolumeSpecName: "kube-api-access-9rgvb") pod "39adf6a6-10cd-412d-aca4-3c68ddcf8887" (UID: "39adf6a6-10cd-412d-aca4-3c68ddcf8887"). InnerVolumeSpecName "kube-api-access-9rgvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.160308 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "39adf6a6-10cd-412d-aca4-3c68ddcf8887" (UID: "39adf6a6-10cd-412d-aca4-3c68ddcf8887"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.169116 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-inventory" (OuterVolumeSpecName: "inventory") pod "39adf6a6-10cd-412d-aca4-3c68ddcf8887" (UID: "39adf6a6-10cd-412d-aca4-3c68ddcf8887"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.233936 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.236770 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39adf6a6-10cd-412d-aca4-3c68ddcf8887-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.236794 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rgvb\" (UniqueName: \"kubernetes.io/projected/39adf6a6-10cd-412d-aca4-3c68ddcf8887-kube-api-access-9rgvb\") on node \"crc\" DevicePath \"\"" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.573174 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" event={"ID":"39adf6a6-10cd-412d-aca4-3c68ddcf8887","Type":"ContainerDied","Data":"ef6efd355e0b4c3febb5707c62cbcfd772fd1e6a6f9bafd7965c6c7bab5ba343"} Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.573240 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef6efd355e0b4c3febb5707c62cbcfd772fd1e6a6f9bafd7965c6c7bab5ba343" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.573245 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rtws8" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.683791 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x"] Jan 30 09:05:48 crc kubenswrapper[4758]: E0130 09:05:48.684288 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39adf6a6-10cd-412d-aca4-3c68ddcf8887" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.684314 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="39adf6a6-10cd-412d-aca4-3c68ddcf8887" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.684572 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="39adf6a6-10cd-412d-aca4-3c68ddcf8887" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.685389 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.688193 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.688243 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.688683 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.691224 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.693818 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x"] Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.848289 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-95b4x\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.848573 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-95b4x\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.848626 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz7sv\" (UniqueName: \"kubernetes.io/projected/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-kube-api-access-zz7sv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-95b4x\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.951176 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-95b4x\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.951865 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz7sv\" (UniqueName: \"kubernetes.io/projected/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-kube-api-access-zz7sv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-95b4x\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.952535 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-95b4x\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.959212 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-95b4x\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.961516 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-95b4x\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:48 crc kubenswrapper[4758]: I0130 09:05:48.972178 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz7sv\" (UniqueName: \"kubernetes.io/projected/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-kube-api-access-zz7sv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-95b4x\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:49 crc kubenswrapper[4758]: I0130 09:05:49.034132 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:05:49 crc kubenswrapper[4758]: I0130 09:05:49.611618 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x"] Jan 30 09:05:50 crc kubenswrapper[4758]: I0130 09:05:50.589033 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" event={"ID":"b5727310-9e0b-40f5-ae4e-209ed7d3ee36","Type":"ContainerStarted","Data":"5793e1001e78c7d540918afb3a8f95ec2302b1ab81096461a7cfe99d2adc1bb6"} Jan 30 09:05:50 crc kubenswrapper[4758]: I0130 09:05:50.589420 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" event={"ID":"b5727310-9e0b-40f5-ae4e-209ed7d3ee36","Type":"ContainerStarted","Data":"ae501280db69d617245184bac11d4250b957b0ded8237a8688a17ead9f427506"} Jan 30 09:05:50 crc kubenswrapper[4758]: I0130 09:05:50.605520 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" podStartSLOduration=2.041347269 podStartE2EDuration="2.605504078s" podCreationTimestamp="2026-01-30 09:05:48 +0000 UTC" firstStartedPulling="2026-01-30 09:05:49.626200924 +0000 UTC m=+2154.598512475" lastFinishedPulling="2026-01-30 09:05:50.190357733 +0000 UTC m=+2155.162669284" observedRunningTime="2026-01-30 09:05:50.603608609 +0000 UTC m=+2155.575920160" watchObservedRunningTime="2026-01-30 09:05:50.605504078 +0000 UTC m=+2155.577815629" Jan 30 09:05:52 crc kubenswrapper[4758]: I0130 09:05:52.387805 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:05:52 crc kubenswrapper[4758]: I0130 09:05:52.388162 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:06:22 crc kubenswrapper[4758]: I0130 09:06:22.387320 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:06:22 crc kubenswrapper[4758]: I0130 09:06:22.387895 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:06:22 crc kubenswrapper[4758]: I0130 09:06:22.387936 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:06:22 crc kubenswrapper[4758]: I0130 09:06:22.388669 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:06:22 crc kubenswrapper[4758]: I0130 09:06:22.388719 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" gracePeriod=600 Jan 30 09:06:22 crc kubenswrapper[4758]: E0130 09:06:22.533286 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:06:22 crc kubenswrapper[4758]: I0130 09:06:22.847822 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" exitCode=0 Jan 30 09:06:22 crc kubenswrapper[4758]: I0130 09:06:22.847887 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41"} Jan 30 09:06:22 crc kubenswrapper[4758]: I0130 09:06:22.848089 4758 scope.go:117] "RemoveContainer" 
containerID="a4e6da0cc99379149fae03030aa7bdc7f1cee2816f3c69e88906bfdf6d6dfda0" Jan 30 09:06:22 crc kubenswrapper[4758]: I0130 09:06:22.848779 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:06:22 crc kubenswrapper[4758]: E0130 09:06:22.849075 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:06:33 crc kubenswrapper[4758]: I0130 09:06:33.769265 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:06:33 crc kubenswrapper[4758]: E0130 09:06:33.773598 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:06:38 crc kubenswrapper[4758]: I0130 09:06:38.977892 4758 generic.go:334] "Generic (PLEG): container finished" podID="b5727310-9e0b-40f5-ae4e-209ed7d3ee36" containerID="5793e1001e78c7d540918afb3a8f95ec2302b1ab81096461a7cfe99d2adc1bb6" exitCode=0 Jan 30 09:06:38 crc kubenswrapper[4758]: I0130 09:06:38.978475 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" event={"ID":"b5727310-9e0b-40f5-ae4e-209ed7d3ee36","Type":"ContainerDied","Data":"5793e1001e78c7d540918afb3a8f95ec2302b1ab81096461a7cfe99d2adc1bb6"} Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.407621 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.590796 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-inventory\") pod \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.590917 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-ssh-key-openstack-edpm-ipam\") pod \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.590992 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz7sv\" (UniqueName: \"kubernetes.io/projected/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-kube-api-access-zz7sv\") pod \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\" (UID: \"b5727310-9e0b-40f5-ae4e-209ed7d3ee36\") " Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.597235 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-kube-api-access-zz7sv" (OuterVolumeSpecName: "kube-api-access-zz7sv") pod "b5727310-9e0b-40f5-ae4e-209ed7d3ee36" (UID: "b5727310-9e0b-40f5-ae4e-209ed7d3ee36"). InnerVolumeSpecName "kube-api-access-zz7sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.621080 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-inventory" (OuterVolumeSpecName: "inventory") pod "b5727310-9e0b-40f5-ae4e-209ed7d3ee36" (UID: "b5727310-9e0b-40f5-ae4e-209ed7d3ee36"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.638941 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b5727310-9e0b-40f5-ae4e-209ed7d3ee36" (UID: "b5727310-9e0b-40f5-ae4e-209ed7d3ee36"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.693731 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.693764 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.693776 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz7sv\" (UniqueName: \"kubernetes.io/projected/b5727310-9e0b-40f5-ae4e-209ed7d3ee36-kube-api-access-zz7sv\") on node \"crc\" DevicePath \"\"" Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.994113 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" event={"ID":"b5727310-9e0b-40f5-ae4e-209ed7d3ee36","Type":"ContainerDied","Data":"ae501280db69d617245184bac11d4250b957b0ded8237a8688a17ead9f427506"} Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.994156 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae501280db69d617245184bac11d4250b957b0ded8237a8688a17ead9f427506" Jan 30 09:06:40 crc kubenswrapper[4758]: I0130 09:06:40.994201 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-95b4x" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.237349 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wlb6p"] Jan 30 09:06:41 crc kubenswrapper[4758]: E0130 09:06:41.238112 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5727310-9e0b-40f5-ae4e-209ed7d3ee36" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.238154 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5727310-9e0b-40f5-ae4e-209ed7d3ee36" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.238463 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5727310-9e0b-40f5-ae4e-209ed7d3ee36" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.241373 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.250124 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.250862 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.253877 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.254549 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.255419 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wlb6p\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.255539 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wlb6p\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.255682 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jslf\" (UniqueName: \"kubernetes.io/projected/1135df14-5d2f-47ff-9038-fce2addc71d0-kube-api-access-4jslf\") pod \"ssh-known-hosts-edpm-deployment-wlb6p\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.258024 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wlb6p"] Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.359186 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wlb6p\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.359323 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wlb6p\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.359429 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jslf\" (UniqueName: \"kubernetes.io/projected/1135df14-5d2f-47ff-9038-fce2addc71d0-kube-api-access-4jslf\") pod \"ssh-known-hosts-edpm-deployment-wlb6p\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc 
kubenswrapper[4758]: I0130 09:06:41.363374 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wlb6p\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.365072 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wlb6p\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.380030 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jslf\" (UniqueName: \"kubernetes.io/projected/1135df14-5d2f-47ff-9038-fce2addc71d0-kube-api-access-4jslf\") pod \"ssh-known-hosts-edpm-deployment-wlb6p\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:41 crc kubenswrapper[4758]: I0130 09:06:41.567423 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:42 crc kubenswrapper[4758]: I0130 09:06:42.067148 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wlb6p"] Jan 30 09:06:43 crc kubenswrapper[4758]: I0130 09:06:43.010376 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" event={"ID":"1135df14-5d2f-47ff-9038-fce2addc71d0","Type":"ContainerStarted","Data":"f745e166067f02a1df0dfdb9c6ca2f5f3ef50a342ff6043a33e62e116963b93f"} Jan 30 09:06:43 crc kubenswrapper[4758]: I0130 09:06:43.011711 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" event={"ID":"1135df14-5d2f-47ff-9038-fce2addc71d0","Type":"ContainerStarted","Data":"b68ff98e80b33635b3d571d2ad2075a64410d86b11bdbe49b49e782727e62910"} Jan 30 09:06:43 crc kubenswrapper[4758]: I0130 09:06:43.031089 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" podStartSLOduration=1.5333261679999999 podStartE2EDuration="2.031017649s" podCreationTimestamp="2026-01-30 09:06:41 +0000 UTC" firstStartedPulling="2026-01-30 09:06:42.076931195 +0000 UTC m=+2207.049242756" lastFinishedPulling="2026-01-30 09:06:42.574622686 +0000 UTC m=+2207.546934237" observedRunningTime="2026-01-30 09:06:43.026610563 +0000 UTC m=+2207.998922124" watchObservedRunningTime="2026-01-30 09:06:43.031017649 +0000 UTC m=+2208.003329200" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.373781 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lqb9z"] Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.375957 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.428408 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lqb9z"] Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.518968 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-catalog-content\") pod \"certified-operators-lqb9z\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.519027 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-utilities\") pod \"certified-operators-lqb9z\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.519089 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzmz2\" (UniqueName: \"kubernetes.io/projected/805974e6-14ab-447b-a106-e126ae4d93fa-kube-api-access-fzmz2\") pod \"certified-operators-lqb9z\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.620825 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-catalog-content\") pod \"certified-operators-lqb9z\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.620889 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-utilities\") pod \"certified-operators-lqb9z\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.620939 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzmz2\" (UniqueName: \"kubernetes.io/projected/805974e6-14ab-447b-a106-e126ae4d93fa-kube-api-access-fzmz2\") pod \"certified-operators-lqb9z\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.621442 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-catalog-content\") pod \"certified-operators-lqb9z\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.621543 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-utilities\") pod \"certified-operators-lqb9z\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.647437 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fzmz2\" (UniqueName: \"kubernetes.io/projected/805974e6-14ab-447b-a106-e126ae4d93fa-kube-api-access-fzmz2\") pod \"certified-operators-lqb9z\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:44 crc kubenswrapper[4758]: I0130 09:06:44.701032 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:45 crc kubenswrapper[4758]: I0130 09:06:45.231292 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lqb9z"] Jan 30 09:06:46 crc kubenswrapper[4758]: I0130 09:06:46.034224 4758 generic.go:334] "Generic (PLEG): container finished" podID="805974e6-14ab-447b-a106-e126ae4d93fa" containerID="8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd" exitCode=0 Jan 30 09:06:46 crc kubenswrapper[4758]: I0130 09:06:46.034292 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqb9z" event={"ID":"805974e6-14ab-447b-a106-e126ae4d93fa","Type":"ContainerDied","Data":"8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd"} Jan 30 09:06:46 crc kubenswrapper[4758]: I0130 09:06:46.035676 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqb9z" event={"ID":"805974e6-14ab-447b-a106-e126ae4d93fa","Type":"ContainerStarted","Data":"08100a81bfc18cc43a5cb8fab19ef90b5843598a774b1d588c15073d0cabb9f4"} Jan 30 09:06:46 crc kubenswrapper[4758]: I0130 09:06:46.769123 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:06:46 crc kubenswrapper[4758]: E0130 09:06:46.769876 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:06:47 crc kubenswrapper[4758]: I0130 09:06:47.044419 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqb9z" event={"ID":"805974e6-14ab-447b-a106-e126ae4d93fa","Type":"ContainerStarted","Data":"3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce"} Jan 30 09:06:48 crc kubenswrapper[4758]: I0130 09:06:48.055189 4758 generic.go:334] "Generic (PLEG): container finished" podID="805974e6-14ab-447b-a106-e126ae4d93fa" containerID="3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce" exitCode=0 Jan 30 09:06:48 crc kubenswrapper[4758]: I0130 09:06:48.055261 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqb9z" event={"ID":"805974e6-14ab-447b-a106-e126ae4d93fa","Type":"ContainerDied","Data":"3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce"} Jan 30 09:06:49 crc kubenswrapper[4758]: I0130 09:06:49.066657 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqb9z" event={"ID":"805974e6-14ab-447b-a106-e126ae4d93fa","Type":"ContainerStarted","Data":"4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3"} Jan 30 09:06:50 crc kubenswrapper[4758]: I0130 09:06:50.082748 4758 generic.go:334] 
"Generic (PLEG): container finished" podID="1135df14-5d2f-47ff-9038-fce2addc71d0" containerID="f745e166067f02a1df0dfdb9c6ca2f5f3ef50a342ff6043a33e62e116963b93f" exitCode=0 Jan 30 09:06:50 crc kubenswrapper[4758]: I0130 09:06:50.083161 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" event={"ID":"1135df14-5d2f-47ff-9038-fce2addc71d0","Type":"ContainerDied","Data":"f745e166067f02a1df0dfdb9c6ca2f5f3ef50a342ff6043a33e62e116963b93f"} Jan 30 09:06:50 crc kubenswrapper[4758]: I0130 09:06:50.114178 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lqb9z" podStartSLOduration=3.692465323 podStartE2EDuration="6.114150151s" podCreationTimestamp="2026-01-30 09:06:44 +0000 UTC" firstStartedPulling="2026-01-30 09:06:46.036299459 +0000 UTC m=+2211.008611010" lastFinishedPulling="2026-01-30 09:06:48.457984287 +0000 UTC m=+2213.430295838" observedRunningTime="2026-01-30 09:06:49.090158293 +0000 UTC m=+2214.062469854" watchObservedRunningTime="2026-01-30 09:06:50.114150151 +0000 UTC m=+2215.086461722" Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.523319 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.661830 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-inventory-0\") pod \"1135df14-5d2f-47ff-9038-fce2addc71d0\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.661926 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jslf\" (UniqueName: \"kubernetes.io/projected/1135df14-5d2f-47ff-9038-fce2addc71d0-kube-api-access-4jslf\") pod \"1135df14-5d2f-47ff-9038-fce2addc71d0\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.662076 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-ssh-key-openstack-edpm-ipam\") pod \"1135df14-5d2f-47ff-9038-fce2addc71d0\" (UID: \"1135df14-5d2f-47ff-9038-fce2addc71d0\") " Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.669951 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1135df14-5d2f-47ff-9038-fce2addc71d0-kube-api-access-4jslf" (OuterVolumeSpecName: "kube-api-access-4jslf") pod "1135df14-5d2f-47ff-9038-fce2addc71d0" (UID: "1135df14-5d2f-47ff-9038-fce2addc71d0"). InnerVolumeSpecName "kube-api-access-4jslf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.690741 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "1135df14-5d2f-47ff-9038-fce2addc71d0" (UID: "1135df14-5d2f-47ff-9038-fce2addc71d0"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.692657 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1135df14-5d2f-47ff-9038-fce2addc71d0" (UID: "1135df14-5d2f-47ff-9038-fce2addc71d0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.763889 4758 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.763932 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jslf\" (UniqueName: \"kubernetes.io/projected/1135df14-5d2f-47ff-9038-fce2addc71d0-kube-api-access-4jslf\") on node \"crc\" DevicePath \"\"" Jan 30 09:06:51 crc kubenswrapper[4758]: I0130 09:06:51.763948 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1135df14-5d2f-47ff-9038-fce2addc71d0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.131980 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" event={"ID":"1135df14-5d2f-47ff-9038-fce2addc71d0","Type":"ContainerDied","Data":"b68ff98e80b33635b3d571d2ad2075a64410d86b11bdbe49b49e782727e62910"} Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.132022 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b68ff98e80b33635b3d571d2ad2075a64410d86b11bdbe49b49e782727e62910" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.132096 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wlb6p" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.256966 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"] Jan 30 09:06:52 crc kubenswrapper[4758]: E0130 09:06:52.257695 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1135df14-5d2f-47ff-9038-fce2addc71d0" containerName="ssh-known-hosts-edpm-deployment" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.257715 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="1135df14-5d2f-47ff-9038-fce2addc71d0" containerName="ssh-known-hosts-edpm-deployment" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.257968 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1135df14-5d2f-47ff-9038-fce2addc71d0" containerName="ssh-known-hosts-edpm-deployment" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.258734 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.261482 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.261624 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.261654 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.263896 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.322163 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"] Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.408588 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5nm8t\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.408661 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qddm4\" (UniqueName: \"kubernetes.io/projected/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-kube-api-access-qddm4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5nm8t\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.408701 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5nm8t\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.510878 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5nm8t\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.510967 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qddm4\" (UniqueName: \"kubernetes.io/projected/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-kube-api-access-qddm4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5nm8t\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.511017 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-5nm8t\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.517886 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5nm8t\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.534079 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5nm8t\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.537601 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qddm4\" (UniqueName: \"kubernetes.io/projected/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-kube-api-access-qddm4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5nm8t\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" Jan 30 09:06:52 crc kubenswrapper[4758]: I0130 09:06:52.617411 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" Jan 30 09:06:53 crc kubenswrapper[4758]: I0130 09:06:53.156337 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t"] Jan 30 09:06:53 crc kubenswrapper[4758]: W0130 09:06:53.162307 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b0522c4_4143_4ac3_b5c7_ea6c073dfc38.slice/crio-a6020fe2ceeedef7d16c580ed1a3dac97c3b235fc2fecdcb4c27de07b2352c43 WatchSource:0}: Error finding container a6020fe2ceeedef7d16c580ed1a3dac97c3b235fc2fecdcb4c27de07b2352c43: Status 404 returned error can't find the container with id a6020fe2ceeedef7d16c580ed1a3dac97c3b235fc2fecdcb4c27de07b2352c43 Jan 30 09:06:54 crc kubenswrapper[4758]: I0130 09:06:54.148504 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" event={"ID":"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38","Type":"ContainerStarted","Data":"adb32b8102ded5b49f066a27ed9dfacb49df27b76fdbf55f0df66a6e3c6e4218"} Jan 30 09:06:54 crc kubenswrapper[4758]: I0130 09:06:54.148875 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" event={"ID":"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38","Type":"ContainerStarted","Data":"a6020fe2ceeedef7d16c580ed1a3dac97c3b235fc2fecdcb4c27de07b2352c43"} Jan 30 09:06:54 crc kubenswrapper[4758]: I0130 09:06:54.169479 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" podStartSLOduration=1.7024343929999999 podStartE2EDuration="2.169457404s" podCreationTimestamp="2026-01-30 09:06:52 +0000 UTC" firstStartedPulling="2026-01-30 09:06:53.166459297 +0000 UTC m=+2218.138770848" lastFinishedPulling="2026-01-30 09:06:53.633482288 +0000 UTC m=+2218.605793859" 
observedRunningTime="2026-01-30 09:06:54.163359246 +0000 UTC m=+2219.135670797" watchObservedRunningTime="2026-01-30 09:06:54.169457404 +0000 UTC m=+2219.141768965" Jan 30 09:06:54 crc kubenswrapper[4758]: I0130 09:06:54.701680 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:54 crc kubenswrapper[4758]: I0130 09:06:54.702022 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:54 crc kubenswrapper[4758]: I0130 09:06:54.748109 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:55 crc kubenswrapper[4758]: I0130 09:06:55.218430 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:55 crc kubenswrapper[4758]: I0130 09:06:55.266881 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lqb9z"] Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.173421 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lqb9z" podUID="805974e6-14ab-447b-a106-e126ae4d93fa" containerName="registry-server" containerID="cri-o://4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3" gracePeriod=2 Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.742678 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.827120 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-utilities\") pod \"805974e6-14ab-447b-a106-e126ae4d93fa\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.827327 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzmz2\" (UniqueName: \"kubernetes.io/projected/805974e6-14ab-447b-a106-e126ae4d93fa-kube-api-access-fzmz2\") pod \"805974e6-14ab-447b-a106-e126ae4d93fa\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.827359 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-catalog-content\") pod \"805974e6-14ab-447b-a106-e126ae4d93fa\" (UID: \"805974e6-14ab-447b-a106-e126ae4d93fa\") " Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.828211 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-utilities" (OuterVolumeSpecName: "utilities") pod "805974e6-14ab-447b-a106-e126ae4d93fa" (UID: "805974e6-14ab-447b-a106-e126ae4d93fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.834396 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/805974e6-14ab-447b-a106-e126ae4d93fa-kube-api-access-fzmz2" (OuterVolumeSpecName: "kube-api-access-fzmz2") pod "805974e6-14ab-447b-a106-e126ae4d93fa" (UID: "805974e6-14ab-447b-a106-e126ae4d93fa"). 
InnerVolumeSpecName "kube-api-access-fzmz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.889178 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "805974e6-14ab-447b-a106-e126ae4d93fa" (UID: "805974e6-14ab-447b-a106-e126ae4d93fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.931019 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.931147 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzmz2\" (UniqueName: \"kubernetes.io/projected/805974e6-14ab-447b-a106-e126ae4d93fa-kube-api-access-fzmz2\") on node \"crc\" DevicePath \"\"" Jan 30 09:06:57 crc kubenswrapper[4758]: I0130 09:06:57.931163 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805974e6-14ab-447b-a106-e126ae4d93fa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.184089 4758 generic.go:334] "Generic (PLEG): container finished" podID="805974e6-14ab-447b-a106-e126ae4d93fa" containerID="4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3" exitCode=0 Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.184132 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqb9z" event={"ID":"805974e6-14ab-447b-a106-e126ae4d93fa","Type":"ContainerDied","Data":"4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3"} Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.184164 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lqb9z" event={"ID":"805974e6-14ab-447b-a106-e126ae4d93fa","Type":"ContainerDied","Data":"08100a81bfc18cc43a5cb8fab19ef90b5843598a774b1d588c15073d0cabb9f4"} Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.184184 4758 scope.go:117] "RemoveContainer" containerID="4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3" Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.184324 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lqb9z" Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.228287 4758 scope.go:117] "RemoveContainer" containerID="3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce" Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.237480 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lqb9z"] Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.246594 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lqb9z"] Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.255256 4758 scope.go:117] "RemoveContainer" containerID="8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd" Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.302318 4758 scope.go:117] "RemoveContainer" containerID="4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3" Jan 30 09:06:58 crc kubenswrapper[4758]: E0130 09:06:58.303296 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3\": container with ID starting with 4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3 not found: ID does not exist" containerID="4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3" Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.303367 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3"} err="failed to get container status \"4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3\": rpc error: code = NotFound desc = could not find container \"4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3\": container with ID starting with 4dc615b8ec7ee985200cca4730b6b01a9f8b0742764965267f365c9ffa07d9c3 not found: ID does not exist" Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.303401 4758 scope.go:117] "RemoveContainer" containerID="3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce" Jan 30 09:06:58 crc kubenswrapper[4758]: E0130 09:06:58.304398 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce\": container with ID starting with 3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce not found: ID does not exist" containerID="3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce" Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.304432 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce"} err="failed to get container status \"3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce\": rpc error: code = NotFound desc = could not find container \"3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce\": container with ID starting with 3bb624e2006d62e4fa3f943abe71c8b268a422ad64131bb1d2bb2754d97a53ce not found: ID does not exist" Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.304452 4758 scope.go:117] "RemoveContainer" containerID="8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd" Jan 30 09:06:58 crc kubenswrapper[4758]: E0130 09:06:58.304872 4758 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd\": container with ID starting with 8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd not found: ID does not exist" containerID="8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd" Jan 30 09:06:58 crc kubenswrapper[4758]: I0130 09:06:58.304908 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd"} err="failed to get container status \"8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd\": rpc error: code = NotFound desc = could not find container \"8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd\": container with ID starting with 8a83d24153a9831f3a83f83def9728a612069c337acb365dba158ab12403accd not found: ID does not exist" Jan 30 09:06:59 crc kubenswrapper[4758]: I0130 09:06:59.779591 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="805974e6-14ab-447b-a106-e126ae4d93fa" path="/var/lib/kubelet/pods/805974e6-14ab-447b-a106-e126ae4d93fa/volumes" Jan 30 09:07:01 crc kubenswrapper[4758]: I0130 09:07:01.775715 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:07:01 crc kubenswrapper[4758]: E0130 09:07:01.776564 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:07:02 crc kubenswrapper[4758]: I0130 09:07:02.220084 4758 generic.go:334] "Generic (PLEG): container finished" podID="2b0522c4-4143-4ac3-b5c7-ea6c073dfc38" containerID="adb32b8102ded5b49f066a27ed9dfacb49df27b76fdbf55f0df66a6e3c6e4218" exitCode=0 Jan 30 09:07:02 crc kubenswrapper[4758]: I0130 09:07:02.220145 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" event={"ID":"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38","Type":"ContainerDied","Data":"adb32b8102ded5b49f066a27ed9dfacb49df27b76fdbf55f0df66a6e3c6e4218"} Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.614852 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.738394 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qddm4\" (UniqueName: \"kubernetes.io/projected/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-kube-api-access-qddm4\") pod \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.738520 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-ssh-key-openstack-edpm-ipam\") pod \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.739392 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-inventory\") pod \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\" (UID: \"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38\") " Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.743670 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-kube-api-access-qddm4" (OuterVolumeSpecName: "kube-api-access-qddm4") pod "2b0522c4-4143-4ac3-b5c7-ea6c073dfc38" (UID: "2b0522c4-4143-4ac3-b5c7-ea6c073dfc38"). InnerVolumeSpecName "kube-api-access-qddm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.764644 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2b0522c4-4143-4ac3-b5c7-ea6c073dfc38" (UID: "2b0522c4-4143-4ac3-b5c7-ea6c073dfc38"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.773200 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-inventory" (OuterVolumeSpecName: "inventory") pod "2b0522c4-4143-4ac3-b5c7-ea6c073dfc38" (UID: "2b0522c4-4143-4ac3-b5c7-ea6c073dfc38"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.841554 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.841587 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qddm4\" (UniqueName: \"kubernetes.io/projected/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-kube-api-access-qddm4\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:03 crc kubenswrapper[4758]: I0130 09:07:03.841604 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b0522c4-4143-4ac3-b5c7-ea6c073dfc38-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.237807 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" event={"ID":"2b0522c4-4143-4ac3-b5c7-ea6c073dfc38","Type":"ContainerDied","Data":"a6020fe2ceeedef7d16c580ed1a3dac97c3b235fc2fecdcb4c27de07b2352c43"} Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.238104 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6020fe2ceeedef7d16c580ed1a3dac97c3b235fc2fecdcb4c27de07b2352c43" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.237870 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5nm8t" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.318750 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn"] Jan 30 09:07:04 crc kubenswrapper[4758]: E0130 09:07:04.319236 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805974e6-14ab-447b-a106-e126ae4d93fa" containerName="registry-server" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.319251 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="805974e6-14ab-447b-a106-e126ae4d93fa" containerName="registry-server" Jan 30 09:07:04 crc kubenswrapper[4758]: E0130 09:07:04.319277 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805974e6-14ab-447b-a106-e126ae4d93fa" containerName="extract-utilities" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.319285 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="805974e6-14ab-447b-a106-e126ae4d93fa" containerName="extract-utilities" Jan 30 09:07:04 crc kubenswrapper[4758]: E0130 09:07:04.319304 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b0522c4-4143-4ac3-b5c7-ea6c073dfc38" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.319313 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b0522c4-4143-4ac3-b5c7-ea6c073dfc38" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:07:04 crc kubenswrapper[4758]: E0130 09:07:04.319331 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805974e6-14ab-447b-a106-e126ae4d93fa" containerName="extract-content" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.319340 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="805974e6-14ab-447b-a106-e126ae4d93fa" containerName="extract-content" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.319671 4758 
memory_manager.go:354] "RemoveStaleState removing state" podUID="805974e6-14ab-447b-a106-e126ae4d93fa" containerName="registry-server" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.319684 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b0522c4-4143-4ac3-b5c7-ea6c073dfc38" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.320436 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.322780 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.324868 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.326363 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.330447 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.333177 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn"] Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.351102 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.351160 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.351302 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4rff\" (UniqueName: \"kubernetes.io/projected/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-kube-api-access-p4rff\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.453355 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.453593 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.453744 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4rff\" (UniqueName: \"kubernetes.io/projected/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-kube-api-access-p4rff\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.458906 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.465680 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.473086 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4rff\" (UniqueName: \"kubernetes.io/projected/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-kube-api-access-p4rff\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" Jan 30 09:07:04 crc kubenswrapper[4758]: I0130 09:07:04.646711 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" Jan 30 09:07:05 crc kubenswrapper[4758]: I0130 09:07:05.154692 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn"] Jan 30 09:07:05 crc kubenswrapper[4758]: I0130 09:07:05.251377 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" event={"ID":"49ef08a8-e5d8-4a62-bc4d-227948c7fa12","Type":"ContainerStarted","Data":"13618976f3c2154c8026ea564ec1c91778bede8278c88a0f4bb4dc6df457d816"} Jan 30 09:07:06 crc kubenswrapper[4758]: I0130 09:07:06.260126 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" event={"ID":"49ef08a8-e5d8-4a62-bc4d-227948c7fa12","Type":"ContainerStarted","Data":"4e7755a7692a4a0ab57a1634916ed6268ef5b07f500e55f0e1b24eb2510f3a70"} Jan 30 09:07:06 crc kubenswrapper[4758]: I0130 09:07:06.276032 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" podStartSLOduration=1.842255617 podStartE2EDuration="2.276007988s" podCreationTimestamp="2026-01-30 09:07:04 +0000 UTC" firstStartedPulling="2026-01-30 09:07:05.153959014 +0000 UTC m=+2230.126270565" lastFinishedPulling="2026-01-30 09:07:05.587711385 +0000 UTC m=+2230.560022936" observedRunningTime="2026-01-30 09:07:06.274829731 +0000 UTC m=+2231.247141292" watchObservedRunningTime="2026-01-30 09:07:06.276007988 +0000 UTC m=+2231.248319549" Jan 30 09:07:14 crc kubenswrapper[4758]: I0130 09:07:14.768134 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:07:14 crc kubenswrapper[4758]: E0130 09:07:14.769813 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:07:15 crc kubenswrapper[4758]: I0130 09:07:15.346008 4758 generic.go:334] "Generic (PLEG): container finished" podID="49ef08a8-e5d8-4a62-bc4d-227948c7fa12" containerID="4e7755a7692a4a0ab57a1634916ed6268ef5b07f500e55f0e1b24eb2510f3a70" exitCode=0 Jan 30 09:07:15 crc kubenswrapper[4758]: I0130 09:07:15.346108 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" event={"ID":"49ef08a8-e5d8-4a62-bc4d-227948c7fa12","Type":"ContainerDied","Data":"4e7755a7692a4a0ab57a1634916ed6268ef5b07f500e55f0e1b24eb2510f3a70"} Jan 30 09:07:16 crc kubenswrapper[4758]: I0130 09:07:16.982696 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.101103 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-ssh-key-openstack-edpm-ipam\") pod \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.101219 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4rff\" (UniqueName: \"kubernetes.io/projected/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-kube-api-access-p4rff\") pod \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.101325 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-inventory\") pod \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\" (UID: \"49ef08a8-e5d8-4a62-bc4d-227948c7fa12\") " Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.107269 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-kube-api-access-p4rff" (OuterVolumeSpecName: "kube-api-access-p4rff") pod "49ef08a8-e5d8-4a62-bc4d-227948c7fa12" (UID: "49ef08a8-e5d8-4a62-bc4d-227948c7fa12"). InnerVolumeSpecName "kube-api-access-p4rff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.130689 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "49ef08a8-e5d8-4a62-bc4d-227948c7fa12" (UID: "49ef08a8-e5d8-4a62-bc4d-227948c7fa12"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.137848 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-inventory" (OuterVolumeSpecName: "inventory") pod "49ef08a8-e5d8-4a62-bc4d-227948c7fa12" (UID: "49ef08a8-e5d8-4a62-bc4d-227948c7fa12"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.204191 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.204237 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4rff\" (UniqueName: \"kubernetes.io/projected/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-kube-api-access-p4rff\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.204247 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49ef08a8-e5d8-4a62-bc4d-227948c7fa12-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.365215 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" event={"ID":"49ef08a8-e5d8-4a62-bc4d-227948c7fa12","Type":"ContainerDied","Data":"13618976f3c2154c8026ea564ec1c91778bede8278c88a0f4bb4dc6df457d816"} Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.365760 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13618976f3c2154c8026ea564ec1c91778bede8278c88a0f4bb4dc6df457d816" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.365303 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.479941 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq"] Jan 30 09:07:17 crc kubenswrapper[4758]: E0130 09:07:17.480442 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ef08a8-e5d8-4a62-bc4d-227948c7fa12" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.480466 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ef08a8-e5d8-4a62-bc4d-227948c7fa12" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.480672 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ef08a8-e5d8-4a62-bc4d-227948c7fa12" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.481325 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.483571 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.484758 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.484939 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.485166 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.485932 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.485984 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.486133 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.486295 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.513464 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq"] Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.614807 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.614894 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.614946 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.614971 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615004 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615027 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cw65\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-kube-api-access-8cw65\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615067 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615094 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615134 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615157 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615177 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: 
\"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615206 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615227 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.615251 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.717574 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.717628 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.717673 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.717694 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cw65\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-kube-api-access-8cw65\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.717715 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.717745 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.718148 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.718508 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.718538 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.718580 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.718606 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.718650 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-ovn-default-certs-0\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.718709 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.718774 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.723138 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.723336 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.725850 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.726420 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.726606 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc 
kubenswrapper[4758]: I0130 09:07:17.726974 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.727952 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.728064 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.728158 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.728503 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.728922 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.730393 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.732138 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" 
(UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.736669 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cw65\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-kube-api-access-8cw65\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4rccq\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:17 crc kubenswrapper[4758]: I0130 09:07:17.798309 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:18 crc kubenswrapper[4758]: I0130 09:07:18.314917 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq"] Jan 30 09:07:18 crc kubenswrapper[4758]: I0130 09:07:18.379537 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" event={"ID":"836593cc-2b98-4f54-8407-6d92687559f5","Type":"ContainerStarted","Data":"da639f6e610dd22a1e9e3a2924b8ba74eae6e646235058c322474a41909bb605"} Jan 30 09:07:19 crc kubenswrapper[4758]: I0130 09:07:19.388776 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" event={"ID":"836593cc-2b98-4f54-8407-6d92687559f5","Type":"ContainerStarted","Data":"1a4a56b9bf7992228fe58f7b359b41903dfb2553058e5d89e4cee6d5834254c2"} Jan 30 09:07:19 crc kubenswrapper[4758]: I0130 09:07:19.408549 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" podStartSLOduration=2.013281431 podStartE2EDuration="2.40853058s" podCreationTimestamp="2026-01-30 09:07:17 +0000 UTC" firstStartedPulling="2026-01-30 09:07:18.319018323 +0000 UTC m=+2243.291329874" lastFinishedPulling="2026-01-30 09:07:18.714267482 +0000 UTC m=+2243.686579023" observedRunningTime="2026-01-30 09:07:19.405672122 +0000 UTC m=+2244.377983673" watchObservedRunningTime="2026-01-30 09:07:19.40853058 +0000 UTC m=+2244.380842131" Jan 30 09:07:29 crc kubenswrapper[4758]: I0130 09:07:29.769432 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:07:29 crc kubenswrapper[4758]: E0130 09:07:29.770304 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:07:40 crc kubenswrapper[4758]: I0130 09:07:40.769671 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:07:40 crc kubenswrapper[4758]: E0130 09:07:40.770682 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:07:55 crc kubenswrapper[4758]: I0130 09:07:55.777943 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:07:55 crc kubenswrapper[4758]: E0130 09:07:55.778737 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:07:56 crc kubenswrapper[4758]: I0130 09:07:56.715319 4758 generic.go:334] "Generic (PLEG): container finished" podID="836593cc-2b98-4f54-8407-6d92687559f5" containerID="1a4a56b9bf7992228fe58f7b359b41903dfb2553058e5d89e4cee6d5834254c2" exitCode=0 Jan 30 09:07:56 crc kubenswrapper[4758]: I0130 09:07:56.715368 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" event={"ID":"836593cc-2b98-4f54-8407-6d92687559f5","Type":"ContainerDied","Data":"1a4a56b9bf7992228fe58f7b359b41903dfb2553058e5d89e4cee6d5834254c2"} Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.216502 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.379373 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-inventory\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.379450 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-libvirt-combined-ca-bundle\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.379522 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-bootstrap-combined-ca-bundle\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.379569 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-nova-combined-ca-bundle\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.379592 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-neutron-metadata-combined-ca-bundle\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.379633 4758 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.379669 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ssh-key-openstack-edpm-ipam\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.379700 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.379730 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.380553 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cw65\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-kube-api-access-8cw65\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.380958 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-telemetry-combined-ca-bundle\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.381011 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-repo-setup-combined-ca-bundle\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.381365 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.381450 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ovn-combined-ca-bundle\") pod \"836593cc-2b98-4f54-8407-6d92687559f5\" (UID: \"836593cc-2b98-4f54-8407-6d92687559f5\") " Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.386614 4758 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.388013 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.388865 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-kube-api-access-8cw65" (OuterVolumeSpecName: "kube-api-access-8cw65") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "kube-api-access-8cw65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.391804 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.391837 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.392189 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.392885 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.392787 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.393929 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.404249 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.404257 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.412746 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.432764 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-inventory" (OuterVolumeSpecName: "inventory") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.437314 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "836593cc-2b98-4f54-8407-6d92687559f5" (UID: "836593cc-2b98-4f54-8407-6d92687559f5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484489 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484525 4758 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484538 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484548 4758 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484560 4758 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484568 4758 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484576 4758 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484585 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484595 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484606 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484615 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484628 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cw65\" (UniqueName: 
\"kubernetes.io/projected/836593cc-2b98-4f54-8407-6d92687559f5-kube-api-access-8cw65\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484638 4758 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.484646 4758 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836593cc-2b98-4f54-8407-6d92687559f5-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.734602 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" event={"ID":"836593cc-2b98-4f54-8407-6d92687559f5","Type":"ContainerDied","Data":"da639f6e610dd22a1e9e3a2924b8ba74eae6e646235058c322474a41909bb605"} Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.735030 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da639f6e610dd22a1e9e3a2924b8ba74eae6e646235058c322474a41909bb605" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.734970 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4rccq" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.880419 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l"] Jan 30 09:07:58 crc kubenswrapper[4758]: E0130 09:07:58.880811 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836593cc-2b98-4f54-8407-6d92687559f5" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.880830 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="836593cc-2b98-4f54-8407-6d92687559f5" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.881062 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="836593cc-2b98-4f54-8407-6d92687559f5" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.881746 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.888352 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.888504 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.888595 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.888631 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.892756 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.921541 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l"] Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.993573 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flnhg\" (UniqueName: \"kubernetes.io/projected/9db7f310-f803-4981-8a55-5d45e9015488-kube-api-access-flnhg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.993631 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9db7f310-f803-4981-8a55-5d45e9015488-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.993663 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.993747 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:58 crc kubenswrapper[4758]: I0130 09:07:58.993824 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.095844 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-flnhg\" (UniqueName: \"kubernetes.io/projected/9db7f310-f803-4981-8a55-5d45e9015488-kube-api-access-flnhg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.095893 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9db7f310-f803-4981-8a55-5d45e9015488-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.095918 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.096003 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.096122 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.098072 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9db7f310-f803-4981-8a55-5d45e9015488-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.100396 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.109068 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.109451 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.113688 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flnhg\" (UniqueName: \"kubernetes.io/projected/9db7f310-f803-4981-8a55-5d45e9015488-kube-api-access-flnhg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fv46l\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.199845 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.735604 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l"] Jan 30 09:07:59 crc kubenswrapper[4758]: I0130 09:07:59.748435 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" event={"ID":"9db7f310-f803-4981-8a55-5d45e9015488","Type":"ContainerStarted","Data":"43854d2e550b3a157e48ef29ea2036779fa7f76a1d19050140b07be00b088424"} Jan 30 09:08:00 crc kubenswrapper[4758]: I0130 09:08:00.757010 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" event={"ID":"9db7f310-f803-4981-8a55-5d45e9015488","Type":"ContainerStarted","Data":"37ef71ddd39ff51e9fb156dd6e3018baa774422d52cbb49086cb98d66a9bd3e4"} Jan 30 09:08:00 crc kubenswrapper[4758]: I0130 09:08:00.778790 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" podStartSLOduration=2.344092603 podStartE2EDuration="2.778765663s" podCreationTimestamp="2026-01-30 09:07:58 +0000 UTC" firstStartedPulling="2026-01-30 09:07:59.735953978 +0000 UTC m=+2284.708265529" lastFinishedPulling="2026-01-30 09:08:00.170627038 +0000 UTC m=+2285.142938589" observedRunningTime="2026-01-30 09:08:00.772170048 +0000 UTC m=+2285.744481609" watchObservedRunningTime="2026-01-30 09:08:00.778765663 +0000 UTC m=+2285.751077214" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.346230 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bq4b9"] Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.348503 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.363527 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq4b9"] Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.447401 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfvsp\" (UniqueName: \"kubernetes.io/projected/66af2af4-4490-4d94-b90f-c6c7c64c06b0-kube-api-access-dfvsp\") pod \"redhat-marketplace-bq4b9\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.447486 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-utilities\") pod \"redhat-marketplace-bq4b9\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.447571 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-catalog-content\") pod \"redhat-marketplace-bq4b9\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.549483 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-utilities\") pod \"redhat-marketplace-bq4b9\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.549594 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-catalog-content\") pod \"redhat-marketplace-bq4b9\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.549716 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfvsp\" (UniqueName: \"kubernetes.io/projected/66af2af4-4490-4d94-b90f-c6c7c64c06b0-kube-api-access-dfvsp\") pod \"redhat-marketplace-bq4b9\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.550593 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-utilities\") pod \"redhat-marketplace-bq4b9\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.550873 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-catalog-content\") pod \"redhat-marketplace-bq4b9\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.601700 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dfvsp\" (UniqueName: \"kubernetes.io/projected/66af2af4-4490-4d94-b90f-c6c7c64c06b0-kube-api-access-dfvsp\") pod \"redhat-marketplace-bq4b9\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:01 crc kubenswrapper[4758]: I0130 09:08:01.667567 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:02 crc kubenswrapper[4758]: I0130 09:08:02.190199 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq4b9"] Jan 30 09:08:02 crc kubenswrapper[4758]: I0130 09:08:02.773590 4758 generic.go:334] "Generic (PLEG): container finished" podID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerID="f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa" exitCode=0 Jan 30 09:08:02 crc kubenswrapper[4758]: I0130 09:08:02.773814 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq4b9" event={"ID":"66af2af4-4490-4d94-b90f-c6c7c64c06b0","Type":"ContainerDied","Data":"f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa"} Jan 30 09:08:02 crc kubenswrapper[4758]: I0130 09:08:02.773835 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq4b9" event={"ID":"66af2af4-4490-4d94-b90f-c6c7c64c06b0","Type":"ContainerStarted","Data":"079f8dfa5914c4bc538986134200346f8bd8704f1601fa58efa1d5cb8fa02289"} Jan 30 09:08:03 crc kubenswrapper[4758]: I0130 09:08:03.786423 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq4b9" event={"ID":"66af2af4-4490-4d94-b90f-c6c7c64c06b0","Type":"ContainerStarted","Data":"f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8"} Jan 30 09:08:04 crc kubenswrapper[4758]: I0130 09:08:04.797249 4758 generic.go:334] "Generic (PLEG): container finished" podID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerID="f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8" exitCode=0 Jan 30 09:08:04 crc kubenswrapper[4758]: I0130 09:08:04.797306 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq4b9" event={"ID":"66af2af4-4490-4d94-b90f-c6c7c64c06b0","Type":"ContainerDied","Data":"f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8"} Jan 30 09:08:05 crc kubenswrapper[4758]: I0130 09:08:05.808891 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq4b9" event={"ID":"66af2af4-4490-4d94-b90f-c6c7c64c06b0","Type":"ContainerStarted","Data":"8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d"} Jan 30 09:08:05 crc kubenswrapper[4758]: I0130 09:08:05.829636 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bq4b9" podStartSLOduration=2.095345657 podStartE2EDuration="4.82961035s" podCreationTimestamp="2026-01-30 09:08:01 +0000 UTC" firstStartedPulling="2026-01-30 09:08:02.774998322 +0000 UTC m=+2287.747309873" lastFinishedPulling="2026-01-30 09:08:05.509263015 +0000 UTC m=+2290.481574566" observedRunningTime="2026-01-30 09:08:05.829331751 +0000 UTC m=+2290.801643312" watchObservedRunningTime="2026-01-30 09:08:05.82961035 +0000 UTC m=+2290.801921911" Jan 30 09:08:07 crc kubenswrapper[4758]: I0130 09:08:07.768793 4758 scope.go:117] "RemoveContainer" 
containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:08:07 crc kubenswrapper[4758]: E0130 09:08:07.769564 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:08:11 crc kubenswrapper[4758]: I0130 09:08:11.668868 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:11 crc kubenswrapper[4758]: I0130 09:08:11.669511 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:11 crc kubenswrapper[4758]: I0130 09:08:11.719795 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:11 crc kubenswrapper[4758]: I0130 09:08:11.897102 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:11 crc kubenswrapper[4758]: I0130 09:08:11.962803 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq4b9"] Jan 30 09:08:13 crc kubenswrapper[4758]: I0130 09:08:13.869741 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bq4b9" podUID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerName="registry-server" containerID="cri-o://8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d" gracePeriod=2 Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.294372 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.395606 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfvsp\" (UniqueName: \"kubernetes.io/projected/66af2af4-4490-4d94-b90f-c6c7c64c06b0-kube-api-access-dfvsp\") pod \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.395755 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-utilities\") pod \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.395901 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-catalog-content\") pod \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\" (UID: \"66af2af4-4490-4d94-b90f-c6c7c64c06b0\") " Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.397028 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-utilities" (OuterVolumeSpecName: "utilities") pod "66af2af4-4490-4d94-b90f-c6c7c64c06b0" (UID: "66af2af4-4490-4d94-b90f-c6c7c64c06b0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.404277 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66af2af4-4490-4d94-b90f-c6c7c64c06b0-kube-api-access-dfvsp" (OuterVolumeSpecName: "kube-api-access-dfvsp") pod "66af2af4-4490-4d94-b90f-c6c7c64c06b0" (UID: "66af2af4-4490-4d94-b90f-c6c7c64c06b0"). InnerVolumeSpecName "kube-api-access-dfvsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.417477 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66af2af4-4490-4d94-b90f-c6c7c64c06b0" (UID: "66af2af4-4490-4d94-b90f-c6c7c64c06b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.498317 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfvsp\" (UniqueName: \"kubernetes.io/projected/66af2af4-4490-4d94-b90f-c6c7c64c06b0-kube-api-access-dfvsp\") on node \"crc\" DevicePath \"\"" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.498372 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.498384 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66af2af4-4490-4d94-b90f-c6c7c64c06b0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.880964 4758 generic.go:334] "Generic (PLEG): container finished" podID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerID="8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d" exitCode=0 Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.881010 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq4b9" event={"ID":"66af2af4-4490-4d94-b90f-c6c7c64c06b0","Type":"ContainerDied","Data":"8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d"} Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.881111 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bq4b9" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.881129 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bq4b9" event={"ID":"66af2af4-4490-4d94-b90f-c6c7c64c06b0","Type":"ContainerDied","Data":"079f8dfa5914c4bc538986134200346f8bd8704f1601fa58efa1d5cb8fa02289"} Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.881153 4758 scope.go:117] "RemoveContainer" containerID="8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.906153 4758 scope.go:117] "RemoveContainer" containerID="f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8" Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.933352 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq4b9"] Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.953728 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bq4b9"] Jan 30 09:08:14 crc kubenswrapper[4758]: I0130 09:08:14.991111 4758 scope.go:117] "RemoveContainer" containerID="f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa" Jan 30 09:08:15 crc kubenswrapper[4758]: I0130 09:08:15.040120 4758 scope.go:117] "RemoveContainer" containerID="8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d" Jan 30 09:08:15 crc kubenswrapper[4758]: E0130 09:08:15.041098 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d\": container with ID starting with 8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d not found: ID does not exist" containerID="8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d" Jan 30 09:08:15 crc kubenswrapper[4758]: I0130 09:08:15.041152 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d"} err="failed to get container status \"8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d\": rpc error: code = NotFound desc = could not find container \"8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d\": container with ID starting with 8e20a69b2fd57a7df17d740aa5ed14cdfccc2b1ce65ab2a00ca183ae8359309d not found: ID does not exist" Jan 30 09:08:15 crc kubenswrapper[4758]: I0130 09:08:15.041182 4758 scope.go:117] "RemoveContainer" containerID="f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8" Jan 30 09:08:15 crc kubenswrapper[4758]: E0130 09:08:15.041571 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8\": container with ID starting with f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8 not found: ID does not exist" containerID="f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8" Jan 30 09:08:15 crc kubenswrapper[4758]: I0130 09:08:15.041609 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8"} err="failed to get container status \"f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8\": rpc error: code = NotFound desc = could not find 
container \"f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8\": container with ID starting with f5df1ace6b82f2afa504587b38f7eceb405ddf6eaa3d00b90962c78659b658c8 not found: ID does not exist" Jan 30 09:08:15 crc kubenswrapper[4758]: I0130 09:08:15.041696 4758 scope.go:117] "RemoveContainer" containerID="f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa" Jan 30 09:08:15 crc kubenswrapper[4758]: E0130 09:08:15.045448 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa\": container with ID starting with f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa not found: ID does not exist" containerID="f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa" Jan 30 09:08:15 crc kubenswrapper[4758]: I0130 09:08:15.045493 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa"} err="failed to get container status \"f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa\": rpc error: code = NotFound desc = could not find container \"f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa\": container with ID starting with f71283635ba8a6c7ab61f49d81d968a2c4f455ca2398911061006723999956fa not found: ID does not exist" Jan 30 09:08:15 crc kubenswrapper[4758]: I0130 09:08:15.780811 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" path="/var/lib/kubelet/pods/66af2af4-4490-4d94-b90f-c6c7c64c06b0/volumes" Jan 30 09:08:20 crc kubenswrapper[4758]: I0130 09:08:20.768834 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:08:20 crc kubenswrapper[4758]: E0130 09:08:20.769944 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:08:34 crc kubenswrapper[4758]: I0130 09:08:34.769333 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:08:34 crc kubenswrapper[4758]: E0130 09:08:34.771370 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:08:46 crc kubenswrapper[4758]: I0130 09:08:46.769012 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:08:46 crc kubenswrapper[4758]: E0130 09:08:46.769837 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:08:55 crc kubenswrapper[4758]: I0130 09:08:55.920016 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g5jnd"] Jan 30 09:08:55 crc kubenswrapper[4758]: E0130 09:08:55.921158 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerName="extract-content" Jan 30 09:08:55 crc kubenswrapper[4758]: I0130 09:08:55.921178 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerName="extract-content" Jan 30 09:08:55 crc kubenswrapper[4758]: E0130 09:08:55.921220 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerName="registry-server" Jan 30 09:08:55 crc kubenswrapper[4758]: I0130 09:08:55.921229 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerName="registry-server" Jan 30 09:08:55 crc kubenswrapper[4758]: E0130 09:08:55.921256 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerName="extract-utilities" Jan 30 09:08:55 crc kubenswrapper[4758]: I0130 09:08:55.921265 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerName="extract-utilities" Jan 30 09:08:55 crc kubenswrapper[4758]: I0130 09:08:55.921502 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="66af2af4-4490-4d94-b90f-c6c7c64c06b0" containerName="registry-server" Jan 30 09:08:55 crc kubenswrapper[4758]: I0130 09:08:55.923384 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:55 crc kubenswrapper[4758]: I0130 09:08:55.952472 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g5jnd"] Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.052865 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swsbg\" (UniqueName: \"kubernetes.io/projected/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-kube-api-access-swsbg\") pod \"community-operators-g5jnd\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.052963 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-catalog-content\") pod \"community-operators-g5jnd\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.053008 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-utilities\") pod \"community-operators-g5jnd\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.154277 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-utilities\") pod \"community-operators-g5jnd\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.154446 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swsbg\" (UniqueName: \"kubernetes.io/projected/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-kube-api-access-swsbg\") pod \"community-operators-g5jnd\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.154504 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-catalog-content\") pod \"community-operators-g5jnd\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.154798 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-utilities\") pod \"community-operators-g5jnd\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.154891 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-catalog-content\") pod \"community-operators-g5jnd\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.183122 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-swsbg\" (UniqueName: \"kubernetes.io/projected/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-kube-api-access-swsbg\") pod \"community-operators-g5jnd\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.249729 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:08:56 crc kubenswrapper[4758]: I0130 09:08:56.973230 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g5jnd"] Jan 30 09:08:57 crc kubenswrapper[4758]: I0130 09:08:57.235341 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jnd" event={"ID":"66507aad-a92a-4bc1-9ebf-0fc2de6b293c","Type":"ContainerStarted","Data":"59d86bb28b62612bf6fc017a25813c662c7653a23aa5fbf6ccfda81731cc8cb8"} Jan 30 09:08:58 crc kubenswrapper[4758]: I0130 09:08:58.244848 4758 generic.go:334] "Generic (PLEG): container finished" podID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerID="b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18" exitCode=0 Jan 30 09:08:58 crc kubenswrapper[4758]: I0130 09:08:58.244934 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jnd" event={"ID":"66507aad-a92a-4bc1-9ebf-0fc2de6b293c","Type":"ContainerDied","Data":"b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18"} Jan 30 09:08:58 crc kubenswrapper[4758]: I0130 09:08:58.246994 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:08:58 crc kubenswrapper[4758]: I0130 09:08:58.768956 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:08:58 crc kubenswrapper[4758]: E0130 09:08:58.769238 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:08:59 crc kubenswrapper[4758]: I0130 09:08:59.257744 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jnd" event={"ID":"66507aad-a92a-4bc1-9ebf-0fc2de6b293c","Type":"ContainerStarted","Data":"85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c"} Jan 30 09:09:01 crc kubenswrapper[4758]: I0130 09:09:01.275622 4758 generic.go:334] "Generic (PLEG): container finished" podID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerID="85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c" exitCode=0 Jan 30 09:09:01 crc kubenswrapper[4758]: I0130 09:09:01.275774 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jnd" event={"ID":"66507aad-a92a-4bc1-9ebf-0fc2de6b293c","Type":"ContainerDied","Data":"85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c"} Jan 30 09:09:02 crc kubenswrapper[4758]: I0130 09:09:02.288640 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jnd" 
event={"ID":"66507aad-a92a-4bc1-9ebf-0fc2de6b293c","Type":"ContainerStarted","Data":"6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d"} Jan 30 09:09:05 crc kubenswrapper[4758]: I0130 09:09:05.319447 4758 generic.go:334] "Generic (PLEG): container finished" podID="9db7f310-f803-4981-8a55-5d45e9015488" containerID="37ef71ddd39ff51e9fb156dd6e3018baa774422d52cbb49086cb98d66a9bd3e4" exitCode=0 Jan 30 09:09:05 crc kubenswrapper[4758]: I0130 09:09:05.319530 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" event={"ID":"9db7f310-f803-4981-8a55-5d45e9015488","Type":"ContainerDied","Data":"37ef71ddd39ff51e9fb156dd6e3018baa774422d52cbb49086cb98d66a9bd3e4"} Jan 30 09:09:05 crc kubenswrapper[4758]: I0130 09:09:05.352284 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g5jnd" podStartSLOduration=6.903151804 podStartE2EDuration="10.352259311s" podCreationTimestamp="2026-01-30 09:08:55 +0000 UTC" firstStartedPulling="2026-01-30 09:08:58.246743858 +0000 UTC m=+2343.219055409" lastFinishedPulling="2026-01-30 09:09:01.695851365 +0000 UTC m=+2346.668162916" observedRunningTime="2026-01-30 09:09:02.324979764 +0000 UTC m=+2347.297291325" watchObservedRunningTime="2026-01-30 09:09:05.352259311 +0000 UTC m=+2350.324570862" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.249863 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.250274 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.303512 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.376990 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.583353 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g5jnd"] Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.777558 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.953983 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ssh-key-openstack-edpm-ipam\") pod \"9db7f310-f803-4981-8a55-5d45e9015488\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.954058 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flnhg\" (UniqueName: \"kubernetes.io/projected/9db7f310-f803-4981-8a55-5d45e9015488-kube-api-access-flnhg\") pod \"9db7f310-f803-4981-8a55-5d45e9015488\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.954140 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9db7f310-f803-4981-8a55-5d45e9015488-ovncontroller-config-0\") pod \"9db7f310-f803-4981-8a55-5d45e9015488\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.954176 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ovn-combined-ca-bundle\") pod \"9db7f310-f803-4981-8a55-5d45e9015488\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.954278 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-inventory\") pod \"9db7f310-f803-4981-8a55-5d45e9015488\" (UID: \"9db7f310-f803-4981-8a55-5d45e9015488\") " Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.969308 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9db7f310-f803-4981-8a55-5d45e9015488-kube-api-access-flnhg" (OuterVolumeSpecName: "kube-api-access-flnhg") pod "9db7f310-f803-4981-8a55-5d45e9015488" (UID: "9db7f310-f803-4981-8a55-5d45e9015488"). InnerVolumeSpecName "kube-api-access-flnhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.969444 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9db7f310-f803-4981-8a55-5d45e9015488" (UID: "9db7f310-f803-4981-8a55-5d45e9015488"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.986921 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9db7f310-f803-4981-8a55-5d45e9015488-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "9db7f310-f803-4981-8a55-5d45e9015488" (UID: "9db7f310-f803-4981-8a55-5d45e9015488"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.990116 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9db7f310-f803-4981-8a55-5d45e9015488" (UID: "9db7f310-f803-4981-8a55-5d45e9015488"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:09:06 crc kubenswrapper[4758]: I0130 09:09:06.994242 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-inventory" (OuterVolumeSpecName: "inventory") pod "9db7f310-f803-4981-8a55-5d45e9015488" (UID: "9db7f310-f803-4981-8a55-5d45e9015488"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.056386 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.056431 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flnhg\" (UniqueName: \"kubernetes.io/projected/9db7f310-f803-4981-8a55-5d45e9015488-kube-api-access-flnhg\") on node \"crc\" DevicePath \"\"" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.056444 4758 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9db7f310-f803-4981-8a55-5d45e9015488-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.056456 4758 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.056469 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9db7f310-f803-4981-8a55-5d45e9015488-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.352754 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.355383 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fv46l" event={"ID":"9db7f310-f803-4981-8a55-5d45e9015488","Type":"ContainerDied","Data":"43854d2e550b3a157e48ef29ea2036779fa7f76a1d19050140b07be00b088424"} Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.355468 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43854d2e550b3a157e48ef29ea2036779fa7f76a1d19050140b07be00b088424" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.452109 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr"] Jan 30 09:09:07 crc kubenswrapper[4758]: E0130 09:09:07.452622 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db7f310-f803-4981-8a55-5d45e9015488" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.452637 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db7f310-f803-4981-8a55-5d45e9015488" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.452893 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="9db7f310-f803-4981-8a55-5d45e9015488" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.453745 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.466720 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.466719 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.466920 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.467000 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.467096 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.467274 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.495009 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr"] Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.585307 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 
09:09:07.585395 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdtkd\" (UniqueName: \"kubernetes.io/projected/e377cd96-b016-4154-bae7-fa61f9be7472-kube-api-access-qdtkd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.585420 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.585533 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.585572 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.585614 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.687664 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdtkd\" (UniqueName: \"kubernetes.io/projected/e377cd96-b016-4154-bae7-fa61f9be7472-kube-api-access-qdtkd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.687722 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.687816 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.687857 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.687879 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.687929 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.697121 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.700997 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.701747 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.705585 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.706168 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.707754 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdtkd\" (UniqueName: \"kubernetes.io/projected/e377cd96-b016-4154-bae7-fa61f9be7472-kube-api-access-qdtkd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:07 crc kubenswrapper[4758]: I0130 09:09:07.785754 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:09:08 crc kubenswrapper[4758]: I0130 09:09:08.344637 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr"] Jan 30 09:09:08 crc kubenswrapper[4758]: I0130 09:09:08.358214 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g5jnd" podUID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerName="registry-server" containerID="cri-o://6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d" gracePeriod=2 Jan 30 09:09:08 crc kubenswrapper[4758]: W0130 09:09:08.359360 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode377cd96_b016_4154_bae7_fa61f9be7472.slice/crio-f9cd15aa091f5a79493fb7fb709bf927a05591981e7792bba85e90f9db43cb8d WatchSource:0}: Error finding container f9cd15aa091f5a79493fb7fb709bf927a05591981e7792bba85e90f9db43cb8d: Status 404 returned error can't find the container with id f9cd15aa091f5a79493fb7fb709bf927a05591981e7792bba85e90f9db43cb8d Jan 30 09:09:08 crc kubenswrapper[4758]: I0130 09:09:08.766397 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:09:08 crc kubenswrapper[4758]: I0130 09:09:08.917077 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-utilities\") pod \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " Jan 30 09:09:08 crc kubenswrapper[4758]: I0130 09:09:08.917306 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-catalog-content\") pod \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " Jan 30 09:09:08 crc kubenswrapper[4758]: I0130 09:09:08.917394 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swsbg\" (UniqueName: \"kubernetes.io/projected/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-kube-api-access-swsbg\") pod \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\" (UID: \"66507aad-a92a-4bc1-9ebf-0fc2de6b293c\") " Jan 30 09:09:08 crc kubenswrapper[4758]: I0130 09:09:08.919193 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-utilities" (OuterVolumeSpecName: "utilities") pod "66507aad-a92a-4bc1-9ebf-0fc2de6b293c" (UID: "66507aad-a92a-4bc1-9ebf-0fc2de6b293c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:09:08 crc kubenswrapper[4758]: I0130 09:09:08.921350 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-kube-api-access-swsbg" (OuterVolumeSpecName: "kube-api-access-swsbg") pod "66507aad-a92a-4bc1-9ebf-0fc2de6b293c" (UID: "66507aad-a92a-4bc1-9ebf-0fc2de6b293c"). InnerVolumeSpecName "kube-api-access-swsbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:09:08 crc kubenswrapper[4758]: I0130 09:09:08.989264 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66507aad-a92a-4bc1-9ebf-0fc2de6b293c" (UID: "66507aad-a92a-4bc1-9ebf-0fc2de6b293c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.019945 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.019985 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.019997 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swsbg\" (UniqueName: \"kubernetes.io/projected/66507aad-a92a-4bc1-9ebf-0fc2de6b293c-kube-api-access-swsbg\") on node \"crc\" DevicePath \"\"" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.368915 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" event={"ID":"e377cd96-b016-4154-bae7-fa61f9be7472","Type":"ContainerStarted","Data":"c57b9a11619b38b44cfb1cae3d8aab457c2a899787e7825908635f30c84cebc8"} Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.369279 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" event={"ID":"e377cd96-b016-4154-bae7-fa61f9be7472","Type":"ContainerStarted","Data":"f9cd15aa091f5a79493fb7fb709bf927a05591981e7792bba85e90f9db43cb8d"} Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.371980 4758 generic.go:334] "Generic (PLEG): container finished" podID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerID="6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d" exitCode=0 Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.372016 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jnd" event={"ID":"66507aad-a92a-4bc1-9ebf-0fc2de6b293c","Type":"ContainerDied","Data":"6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d"} Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.372059 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g5jnd" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.372074 4758 scope.go:117] "RemoveContainer" containerID="6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.372061 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jnd" event={"ID":"66507aad-a92a-4bc1-9ebf-0fc2de6b293c","Type":"ContainerDied","Data":"59d86bb28b62612bf6fc017a25813c662c7653a23aa5fbf6ccfda81731cc8cb8"} Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.388274 4758 scope.go:117] "RemoveContainer" containerID="85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.391789 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" podStartSLOduration=1.934603492 podStartE2EDuration="2.391762861s" podCreationTimestamp="2026-01-30 09:09:07 +0000 UTC" firstStartedPulling="2026-01-30 09:09:08.364569183 +0000 UTC m=+2353.336880734" lastFinishedPulling="2026-01-30 09:09:08.821728562 +0000 UTC m=+2353.794040103" observedRunningTime="2026-01-30 09:09:09.388553342 +0000 UTC m=+2354.360864903" watchObservedRunningTime="2026-01-30 09:09:09.391762861 +0000 UTC m=+2354.364074422" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.413889 4758 scope.go:117] "RemoveContainer" containerID="b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.461357 4758 scope.go:117] "RemoveContainer" containerID="6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d" Jan 30 09:09:09 crc kubenswrapper[4758]: E0130 09:09:09.462072 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d\": container with ID starting with 6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d not found: ID does not exist" containerID="6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.462116 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d"} err="failed to get container status \"6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d\": rpc error: code = NotFound desc = could not find container \"6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d\": container with ID starting with 6d1056ec97b88e446019756085860009344bffc221b3b166f70dc1ffdc0c989d not found: ID does not exist" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.462145 4758 scope.go:117] "RemoveContainer" containerID="85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c" Jan 30 09:09:09 crc kubenswrapper[4758]: E0130 09:09:09.462452 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c\": container with ID starting with 85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c not found: ID does not exist" containerID="85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.462488 4758 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c"} err="failed to get container status \"85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c\": rpc error: code = NotFound desc = could not find container \"85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c\": container with ID starting with 85df41fa9ae9c8137cb2a8600a7af71bf78afa4bde1214e5cab28d9c63d2a21c not found: ID does not exist" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.462506 4758 scope.go:117] "RemoveContainer" containerID="b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18" Jan 30 09:09:09 crc kubenswrapper[4758]: E0130 09:09:09.462858 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18\": container with ID starting with b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18 not found: ID does not exist" containerID="b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.462884 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18"} err="failed to get container status \"b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18\": rpc error: code = NotFound desc = could not find container \"b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18\": container with ID starting with b29c6f2cdd6a52287bc7ed9b9ee039005c932943d62090aad0b25a9acd07cd18 not found: ID does not exist" Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.469926 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g5jnd"] Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.478706 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g5jnd"] Jan 30 09:09:09 crc kubenswrapper[4758]: I0130 09:09:09.777633 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" path="/var/lib/kubelet/pods/66507aad-a92a-4bc1-9ebf-0fc2de6b293c/volumes" Jan 30 09:09:10 crc kubenswrapper[4758]: I0130 09:09:10.768848 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:09:10 crc kubenswrapper[4758]: E0130 09:09:10.769211 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:09:23 crc kubenswrapper[4758]: I0130 09:09:23.769563 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:09:23 crc kubenswrapper[4758]: E0130 09:09:23.770422 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:09:35 crc kubenswrapper[4758]: I0130 09:09:35.775940 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:09:35 crc kubenswrapper[4758]: E0130 09:09:35.776858 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:09:48 crc kubenswrapper[4758]: I0130 09:09:48.769512 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:09:48 crc kubenswrapper[4758]: E0130 09:09:48.770394 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:09:59 crc kubenswrapper[4758]: I0130 09:09:59.768563 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:09:59 crc kubenswrapper[4758]: E0130 09:09:59.769293 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:09:59 crc kubenswrapper[4758]: I0130 09:09:59.797224 4758 generic.go:334] "Generic (PLEG): container finished" podID="e377cd96-b016-4154-bae7-fa61f9be7472" containerID="c57b9a11619b38b44cfb1cae3d8aab457c2a899787e7825908635f30c84cebc8" exitCode=0 Jan 30 09:09:59 crc kubenswrapper[4758]: I0130 09:09:59.797603 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" event={"ID":"e377cd96-b016-4154-bae7-fa61f9be7472","Type":"ContainerDied","Data":"c57b9a11619b38b44cfb1cae3d8aab457c2a899787e7825908635f30c84cebc8"} Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.234908 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.286594 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-nova-metadata-neutron-config-0\") pod \"e377cd96-b016-4154-bae7-fa61f9be7472\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.286653 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e377cd96-b016-4154-bae7-fa61f9be7472\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.286678 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-inventory\") pod \"e377cd96-b016-4154-bae7-fa61f9be7472\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.286717 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdtkd\" (UniqueName: \"kubernetes.io/projected/e377cd96-b016-4154-bae7-fa61f9be7472-kube-api-access-qdtkd\") pod \"e377cd96-b016-4154-bae7-fa61f9be7472\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.286747 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-ssh-key-openstack-edpm-ipam\") pod \"e377cd96-b016-4154-bae7-fa61f9be7472\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.286817 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-metadata-combined-ca-bundle\") pod \"e377cd96-b016-4154-bae7-fa61f9be7472\" (UID: \"e377cd96-b016-4154-bae7-fa61f9be7472\") " Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.294640 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e377cd96-b016-4154-bae7-fa61f9be7472-kube-api-access-qdtkd" (OuterVolumeSpecName: "kube-api-access-qdtkd") pod "e377cd96-b016-4154-bae7-fa61f9be7472" (UID: "e377cd96-b016-4154-bae7-fa61f9be7472"). InnerVolumeSpecName "kube-api-access-qdtkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.307452 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e377cd96-b016-4154-bae7-fa61f9be7472" (UID: "e377cd96-b016-4154-bae7-fa61f9be7472"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.320482 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-inventory" (OuterVolumeSpecName: "inventory") pod "e377cd96-b016-4154-bae7-fa61f9be7472" (UID: "e377cd96-b016-4154-bae7-fa61f9be7472"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.322370 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e377cd96-b016-4154-bae7-fa61f9be7472" (UID: "e377cd96-b016-4154-bae7-fa61f9be7472"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.328198 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e377cd96-b016-4154-bae7-fa61f9be7472" (UID: "e377cd96-b016-4154-bae7-fa61f9be7472"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.335873 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e377cd96-b016-4154-bae7-fa61f9be7472" (UID: "e377cd96-b016-4154-bae7-fa61f9be7472"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.389213 4758 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.389246 4758 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.389258 4758 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.389270 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.389279 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdtkd\" (UniqueName: \"kubernetes.io/projected/e377cd96-b016-4154-bae7-fa61f9be7472-kube-api-access-qdtkd\") on node \"crc\" DevicePath \"\"" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.389287 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e377cd96-b016-4154-bae7-fa61f9be7472-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.818806 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" event={"ID":"e377cd96-b016-4154-bae7-fa61f9be7472","Type":"ContainerDied","Data":"f9cd15aa091f5a79493fb7fb709bf927a05591981e7792bba85e90f9db43cb8d"} Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.819415 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9cd15aa091f5a79493fb7fb709bf927a05591981e7792bba85e90f9db43cb8d" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.819104 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.912827 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv"] Jan 30 09:10:01 crc kubenswrapper[4758]: E0130 09:10:01.913443 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerName="extract-utilities" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.913492 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerName="extract-utilities" Jan 30 09:10:01 crc kubenswrapper[4758]: E0130 09:10:01.913525 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e377cd96-b016-4154-bae7-fa61f9be7472" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.913537 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e377cd96-b016-4154-bae7-fa61f9be7472" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 09:10:01 crc kubenswrapper[4758]: E0130 09:10:01.913568 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerName="registry-server" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.913576 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerName="registry-server" Jan 30 09:10:01 crc kubenswrapper[4758]: E0130 09:10:01.913594 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerName="extract-content" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.913603 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerName="extract-content" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.913827 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="66507aad-a92a-4bc1-9ebf-0fc2de6b293c" containerName="registry-server" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.913860 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e377cd96-b016-4154-bae7-fa61f9be7472" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.914813 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.916987 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.917815 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.918128 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.918493 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.918814 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:10:01 crc kubenswrapper[4758]: I0130 09:10:01.933420 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv"] Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.003899 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.004238 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.004382 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.004551 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsfgp\" (UniqueName: \"kubernetes.io/projected/35efb0cc-1bf4-4052-af18-b206ea052f80-kube-api-access-bsfgp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.004662 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.105972 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.106032 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.106105 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsfgp\" (UniqueName: \"kubernetes.io/projected/35efb0cc-1bf4-4052-af18-b206ea052f80-kube-api-access-bsfgp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.106128 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.106157 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.111161 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.111681 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.111752 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.111775 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.133660 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsfgp\" (UniqueName: \"kubernetes.io/projected/35efb0cc-1bf4-4052-af18-b206ea052f80-kube-api-access-bsfgp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.239398 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.602676 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv"] Jan 30 09:10:02 crc kubenswrapper[4758]: I0130 09:10:02.828573 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" event={"ID":"35efb0cc-1bf4-4052-af18-b206ea052f80","Type":"ContainerStarted","Data":"30133c0490ee20fac56a6741bd65b03b57f8a87d72d29e53c79df6b9f00067cc"} Jan 30 09:10:03 crc kubenswrapper[4758]: I0130 09:10:03.837778 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" event={"ID":"35efb0cc-1bf4-4052-af18-b206ea052f80","Type":"ContainerStarted","Data":"86ec62d0d5fcf1e8a74a7d83e79533755a8f09adadadc64e5d1d41a69ce36daf"} Jan 30 09:10:03 crc kubenswrapper[4758]: I0130 09:10:03.866752 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" podStartSLOduration=2.409331427 podStartE2EDuration="2.866734199s" podCreationTimestamp="2026-01-30 09:10:01 +0000 UTC" firstStartedPulling="2026-01-30 09:10:02.62925752 +0000 UTC m=+2407.601569071" lastFinishedPulling="2026-01-30 09:10:03.086660292 +0000 UTC m=+2408.058971843" observedRunningTime="2026-01-30 09:10:03.860397951 +0000 UTC m=+2408.832709512" watchObservedRunningTime="2026-01-30 09:10:03.866734199 +0000 UTC m=+2408.839045750" Jan 30 09:10:13 crc kubenswrapper[4758]: I0130 09:10:13.769397 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:10:13 crc kubenswrapper[4758]: E0130 09:10:13.770545 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:10:25 crc kubenswrapper[4758]: I0130 09:10:25.768760 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:10:25 crc kubenswrapper[4758]: E0130 09:10:25.769695 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:10:40 crc kubenswrapper[4758]: I0130 09:10:40.768344 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:10:40 crc kubenswrapper[4758]: E0130 09:10:40.768954 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:10:55 crc kubenswrapper[4758]: I0130 09:10:55.775453 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:10:55 crc kubenswrapper[4758]: E0130 09:10:55.776224 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:11:07 crc kubenswrapper[4758]: I0130 09:11:07.769189 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:11:07 crc kubenswrapper[4758]: E0130 09:11:07.769930 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:11:22 crc kubenswrapper[4758]: I0130 09:11:22.769316 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:11:23 crc kubenswrapper[4758]: I0130 09:11:23.493804 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"2cc215bdce1ca5af9f8c68a84b985e34df78173fce9f59d165b301292c9ca0d7"} Jan 30 09:13:22 crc kubenswrapper[4758]: I0130 09:13:22.386933 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:13:22 crc kubenswrapper[4758]: I0130 09:13:22.387658 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:13:52 crc kubenswrapper[4758]: I0130 
09:13:52.387207 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:13:52 crc kubenswrapper[4758]: I0130 09:13:52.387835 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:14:14 crc kubenswrapper[4758]: I0130 09:14:14.952236 4758 generic.go:334] "Generic (PLEG): container finished" podID="35efb0cc-1bf4-4052-af18-b206ea052f80" containerID="86ec62d0d5fcf1e8a74a7d83e79533755a8f09adadadc64e5d1d41a69ce36daf" exitCode=0 Jan 30 09:14:14 crc kubenswrapper[4758]: I0130 09:14:14.952319 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" event={"ID":"35efb0cc-1bf4-4052-af18-b206ea052f80","Type":"ContainerDied","Data":"86ec62d0d5fcf1e8a74a7d83e79533755a8f09adadadc64e5d1d41a69ce36daf"} Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.421968 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.459273 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-ssh-key-openstack-edpm-ipam\") pod \"35efb0cc-1bf4-4052-af18-b206ea052f80\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.459368 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-combined-ca-bundle\") pod \"35efb0cc-1bf4-4052-af18-b206ea052f80\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.459416 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-inventory\") pod \"35efb0cc-1bf4-4052-af18-b206ea052f80\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.459556 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsfgp\" (UniqueName: \"kubernetes.io/projected/35efb0cc-1bf4-4052-af18-b206ea052f80-kube-api-access-bsfgp\") pod \"35efb0cc-1bf4-4052-af18-b206ea052f80\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.459574 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-secret-0\") pod \"35efb0cc-1bf4-4052-af18-b206ea052f80\" (UID: \"35efb0cc-1bf4-4052-af18-b206ea052f80\") " Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.467523 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-combined-ca-bundle" 
(OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "35efb0cc-1bf4-4052-af18-b206ea052f80" (UID: "35efb0cc-1bf4-4052-af18-b206ea052f80"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.468651 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35efb0cc-1bf4-4052-af18-b206ea052f80-kube-api-access-bsfgp" (OuterVolumeSpecName: "kube-api-access-bsfgp") pod "35efb0cc-1bf4-4052-af18-b206ea052f80" (UID: "35efb0cc-1bf4-4052-af18-b206ea052f80"). InnerVolumeSpecName "kube-api-access-bsfgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.501291 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-inventory" (OuterVolumeSpecName: "inventory") pod "35efb0cc-1bf4-4052-af18-b206ea052f80" (UID: "35efb0cc-1bf4-4052-af18-b206ea052f80"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.515223 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "35efb0cc-1bf4-4052-af18-b206ea052f80" (UID: "35efb0cc-1bf4-4052-af18-b206ea052f80"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.525260 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "35efb0cc-1bf4-4052-af18-b206ea052f80" (UID: "35efb0cc-1bf4-4052-af18-b206ea052f80"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.560979 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.561012 4758 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.561024 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.561048 4758 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/35efb0cc-1bf4-4052-af18-b206ea052f80-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.561057 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsfgp\" (UniqueName: \"kubernetes.io/projected/35efb0cc-1bf4-4052-af18-b206ea052f80-kube-api-access-bsfgp\") on node \"crc\" DevicePath \"\"" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.970614 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" event={"ID":"35efb0cc-1bf4-4052-af18-b206ea052f80","Type":"ContainerDied","Data":"30133c0490ee20fac56a6741bd65b03b57f8a87d72d29e53c79df6b9f00067cc"} Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.970659 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30133c0490ee20fac56a6741bd65b03b57f8a87d72d29e53c79df6b9f00067cc" Jan 30 09:14:16 crc kubenswrapper[4758]: I0130 09:14:16.970674 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.144803 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z"] Jan 30 09:14:17 crc kubenswrapper[4758]: E0130 09:14:17.145236 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35efb0cc-1bf4-4052-af18-b206ea052f80" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.145252 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="35efb0cc-1bf4-4052-af18-b206ea052f80" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.145441 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="35efb0cc-1bf4-4052-af18-b206ea052f80" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.146056 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.148652 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.148820 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.153125 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.153204 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.153261 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.153453 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.162425 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.169948 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z"] Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.171398 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.171459 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.171501 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.171601 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.171653 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cq5v\" (UniqueName: 
\"kubernetes.io/projected/48c9d5d6-6dc5-4848-bade-3c302106b074-kube-api-access-4cq5v\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.171685 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.171727 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.171769 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.171795 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.274341 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.274436 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.274480 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cq5v\" (UniqueName: \"kubernetes.io/projected/48c9d5d6-6dc5-4848-bade-3c302106b074-kube-api-access-4cq5v\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.274512 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.274549 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.274581 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.274604 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.274652 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.274681 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.278688 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.278876 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.279089 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.279206 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.280001 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.281222 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.281929 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.283690 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.293977 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cq5v\" (UniqueName: \"kubernetes.io/projected/48c9d5d6-6dc5-4848-bade-3c302106b074-kube-api-access-4cq5v\") pod \"nova-edpm-deployment-openstack-edpm-ipam-vfn7z\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.474278 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.981314 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z"] Jan 30 09:14:17 crc kubenswrapper[4758]: I0130 09:14:17.991351 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:14:18 crc kubenswrapper[4758]: I0130 09:14:18.992957 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" event={"ID":"48c9d5d6-6dc5-4848-bade-3c302106b074","Type":"ContainerStarted","Data":"7d5419d178211565e085acf1cdb2716a8786ff13a1a1ac8b9c2a26dd960a1501"} Jan 30 09:14:20 crc kubenswrapper[4758]: I0130 09:14:20.004623 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" event={"ID":"48c9d5d6-6dc5-4848-bade-3c302106b074","Type":"ContainerStarted","Data":"1d79080e887299522bedac745c89bf263e53d4b14d66aaebf1cebb95d50114f0"} Jan 30 09:14:20 crc kubenswrapper[4758]: I0130 09:14:20.066822 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" podStartSLOduration=2.050482201 podStartE2EDuration="3.066792321s" podCreationTimestamp="2026-01-30 09:14:17 +0000 UTC" firstStartedPulling="2026-01-30 09:14:17.991151937 +0000 UTC m=+2662.963463488" lastFinishedPulling="2026-01-30 09:14:19.007462057 +0000 UTC m=+2663.979773608" observedRunningTime="2026-01-30 09:14:20.060291936 +0000 UTC m=+2665.032603487" watchObservedRunningTime="2026-01-30 09:14:20.066792321 +0000 UTC m=+2665.039103872" Jan 30 09:14:22 crc kubenswrapper[4758]: I0130 09:14:22.387670 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:14:22 crc kubenswrapper[4758]: I0130 09:14:22.388060 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:14:22 crc kubenswrapper[4758]: I0130 09:14:22.388118 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:14:22 crc kubenswrapper[4758]: I0130 09:14:22.388986 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2cc215bdce1ca5af9f8c68a84b985e34df78173fce9f59d165b301292c9ca0d7"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:14:22 crc kubenswrapper[4758]: I0130 09:14:22.389067 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://2cc215bdce1ca5af9f8c68a84b985e34df78173fce9f59d165b301292c9ca0d7" gracePeriod=600 Jan 30 09:14:23 crc 
kubenswrapper[4758]: I0130 09:14:23.029762 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="2cc215bdce1ca5af9f8c68a84b985e34df78173fce9f59d165b301292c9ca0d7" exitCode=0 Jan 30 09:14:23 crc kubenswrapper[4758]: I0130 09:14:23.029825 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"2cc215bdce1ca5af9f8c68a84b985e34df78173fce9f59d165b301292c9ca0d7"} Jan 30 09:14:23 crc kubenswrapper[4758]: I0130 09:14:23.030520 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202"} Jan 30 09:14:23 crc kubenswrapper[4758]: I0130 09:14:23.030562 4758 scope.go:117] "RemoveContainer" containerID="bc3a3cd3217f509cf729ac03ee26df9c23822dad6c7477133b897ec1510fbc41" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.153353 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84"] Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.156571 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.167021 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.167322 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.171474 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84"] Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.193595 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlx25\" (UniqueName: \"kubernetes.io/projected/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-kube-api-access-jlx25\") pod \"collect-profiles-29496075-xcd84\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.193664 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-secret-volume\") pod \"collect-profiles-29496075-xcd84\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.193706 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-config-volume\") pod \"collect-profiles-29496075-xcd84\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.295998 4758 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-jlx25\" (UniqueName: \"kubernetes.io/projected/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-kube-api-access-jlx25\") pod \"collect-profiles-29496075-xcd84\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.296128 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-secret-volume\") pod \"collect-profiles-29496075-xcd84\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.296169 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-config-volume\") pod \"collect-profiles-29496075-xcd84\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.296960 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-config-volume\") pod \"collect-profiles-29496075-xcd84\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.304637 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-secret-volume\") pod \"collect-profiles-29496075-xcd84\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.315555 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlx25\" (UniqueName: \"kubernetes.io/projected/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-kube-api-access-jlx25\") pod \"collect-profiles-29496075-xcd84\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.477669 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:00 crc kubenswrapper[4758]: I0130 09:15:00.927500 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84"] Jan 30 09:15:01 crc kubenswrapper[4758]: I0130 09:15:01.340241 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" event={"ID":"31c0d7e9-81d5-4f36-bc9e-56e22d853f85","Type":"ContainerStarted","Data":"32d29d08b6aebab1b513110ea8db75577887cfa1b8e5f32a8aaa6014efa7ad2d"} Jan 30 09:15:01 crc kubenswrapper[4758]: I0130 09:15:01.340616 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" event={"ID":"31c0d7e9-81d5-4f36-bc9e-56e22d853f85","Type":"ContainerStarted","Data":"7b796af45ea35cdc61b7e6ab04e9005acf14c49e11c349f5312e3103a2deca8a"} Jan 30 09:15:01 crc kubenswrapper[4758]: I0130 09:15:01.370761 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" podStartSLOduration=1.370612549 podStartE2EDuration="1.370612549s" podCreationTimestamp="2026-01-30 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 09:15:01.35631238 +0000 UTC m=+2706.328623951" watchObservedRunningTime="2026-01-30 09:15:01.370612549 +0000 UTC m=+2706.342924100" Jan 30 09:15:02 crc kubenswrapper[4758]: I0130 09:15:02.351563 4758 generic.go:334] "Generic (PLEG): container finished" podID="31c0d7e9-81d5-4f36-bc9e-56e22d853f85" containerID="32d29d08b6aebab1b513110ea8db75577887cfa1b8e5f32a8aaa6014efa7ad2d" exitCode=0 Jan 30 09:15:02 crc kubenswrapper[4758]: I0130 09:15:02.353646 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" event={"ID":"31c0d7e9-81d5-4f36-bc9e-56e22d853f85","Type":"ContainerDied","Data":"32d29d08b6aebab1b513110ea8db75577887cfa1b8e5f32a8aaa6014efa7ad2d"} Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.672312 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.770130 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-config-volume\") pod \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.770292 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlx25\" (UniqueName: \"kubernetes.io/projected/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-kube-api-access-jlx25\") pod \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.770953 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-config-volume" (OuterVolumeSpecName: "config-volume") pod "31c0d7e9-81d5-4f36-bc9e-56e22d853f85" (UID: "31c0d7e9-81d5-4f36-bc9e-56e22d853f85"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.771452 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-secret-volume\") pod \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\" (UID: \"31c0d7e9-81d5-4f36-bc9e-56e22d853f85\") " Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.772298 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.776181 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "31c0d7e9-81d5-4f36-bc9e-56e22d853f85" (UID: "31c0d7e9-81d5-4f36-bc9e-56e22d853f85"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.777440 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-kube-api-access-jlx25" (OuterVolumeSpecName: "kube-api-access-jlx25") pod "31c0d7e9-81d5-4f36-bc9e-56e22d853f85" (UID: "31c0d7e9-81d5-4f36-bc9e-56e22d853f85"). InnerVolumeSpecName "kube-api-access-jlx25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.874307 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlx25\" (UniqueName: \"kubernetes.io/projected/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-kube-api-access-jlx25\") on node \"crc\" DevicePath \"\"" Jan 30 09:15:03 crc kubenswrapper[4758]: I0130 09:15:03.874337 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31c0d7e9-81d5-4f36-bc9e-56e22d853f85-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:15:04 crc kubenswrapper[4758]: I0130 09:15:04.369905 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" event={"ID":"31c0d7e9-81d5-4f36-bc9e-56e22d853f85","Type":"ContainerDied","Data":"7b796af45ea35cdc61b7e6ab04e9005acf14c49e11c349f5312e3103a2deca8a"} Jan 30 09:15:04 crc kubenswrapper[4758]: I0130 09:15:04.370300 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b796af45ea35cdc61b7e6ab04e9005acf14c49e11c349f5312e3103a2deca8a" Jan 30 09:15:04 crc kubenswrapper[4758]: I0130 09:15:04.370129 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84" Jan 30 09:15:04 crc kubenswrapper[4758]: I0130 09:15:04.437315 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz"] Jan 30 09:15:04 crc kubenswrapper[4758]: I0130 09:15:04.446852 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496030-jndtz"] Jan 30 09:15:05 crc kubenswrapper[4758]: I0130 09:15:05.782719 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9313ed67-0218-4d32-adf7-710ba67de622" path="/var/lib/kubelet/pods/9313ed67-0218-4d32-adf7-710ba67de622/volumes" Jan 30 09:15:10 crc kubenswrapper[4758]: I0130 09:15:10.214687 4758 scope.go:117] "RemoveContainer" containerID="166bda0445ccb2dd7e9715331b62d1e0995dd20e40dce82e327a772dd4e0caf9" Jan 30 09:15:20 crc kubenswrapper[4758]: I0130 09:15:20.822836 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7xp27"] Jan 30 09:15:20 crc kubenswrapper[4758]: E0130 09:15:20.823747 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c0d7e9-81d5-4f36-bc9e-56e22d853f85" containerName="collect-profiles" Jan 30 09:15:20 crc kubenswrapper[4758]: I0130 09:15:20.823761 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c0d7e9-81d5-4f36-bc9e-56e22d853f85" containerName="collect-profiles" Jan 30 09:15:20 crc kubenswrapper[4758]: I0130 09:15:20.823982 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="31c0d7e9-81d5-4f36-bc9e-56e22d853f85" containerName="collect-profiles" Jan 30 09:15:20 crc kubenswrapper[4758]: I0130 09:15:20.826565 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:20 crc kubenswrapper[4758]: I0130 09:15:20.852362 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7xp27"] Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.020429 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jntk8\" (UniqueName: \"kubernetes.io/projected/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-kube-api-access-jntk8\") pod \"redhat-operators-7xp27\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.020473 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-catalog-content\") pod \"redhat-operators-7xp27\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.020653 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-utilities\") pod \"redhat-operators-7xp27\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.122269 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jntk8\" (UniqueName: \"kubernetes.io/projected/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-kube-api-access-jntk8\") pod \"redhat-operators-7xp27\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.122318 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-catalog-content\") pod \"redhat-operators-7xp27\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.122408 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-utilities\") pod \"redhat-operators-7xp27\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.122937 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-utilities\") pod \"redhat-operators-7xp27\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.124112 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-catalog-content\") pod \"redhat-operators-7xp27\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.143536 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jntk8\" (UniqueName: \"kubernetes.io/projected/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-kube-api-access-jntk8\") pod \"redhat-operators-7xp27\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.149301 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:21 crc kubenswrapper[4758]: I0130 09:15:21.663557 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7xp27"] Jan 30 09:15:22 crc kubenswrapper[4758]: I0130 09:15:22.532604 4758 generic.go:334] "Generic (PLEG): container finished" podID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerID="44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6" exitCode=0 Jan 30 09:15:22 crc kubenswrapper[4758]: I0130 09:15:22.532706 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xp27" event={"ID":"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe","Type":"ContainerDied","Data":"44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6"} Jan 30 09:15:22 crc kubenswrapper[4758]: I0130 09:15:22.534551 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xp27" event={"ID":"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe","Type":"ContainerStarted","Data":"fe42b9a116c8937d09bddab48d970d2230399a89728a7a74032adb1a823c994d"} Jan 30 09:15:23 crc kubenswrapper[4758]: I0130 09:15:23.544679 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xp27" event={"ID":"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe","Type":"ContainerStarted","Data":"402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3"} Jan 30 09:15:29 crc kubenswrapper[4758]: I0130 09:15:29.605231 4758 generic.go:334] "Generic (PLEG): container finished" podID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerID="402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3" exitCode=0 Jan 30 09:15:29 crc kubenswrapper[4758]: I0130 09:15:29.605284 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xp27" event={"ID":"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe","Type":"ContainerDied","Data":"402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3"} Jan 30 09:15:30 crc kubenswrapper[4758]: I0130 09:15:30.616573 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xp27" event={"ID":"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe","Type":"ContainerStarted","Data":"9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2"} Jan 30 09:15:30 crc kubenswrapper[4758]: I0130 09:15:30.650388 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7xp27" podStartSLOduration=3.151292853 podStartE2EDuration="10.649721462s" podCreationTimestamp="2026-01-30 09:15:20 +0000 UTC" firstStartedPulling="2026-01-30 09:15:22.534535575 +0000 UTC m=+2727.506847126" lastFinishedPulling="2026-01-30 09:15:30.032964184 +0000 UTC m=+2735.005275735" observedRunningTime="2026-01-30 09:15:30.638798728 +0000 UTC m=+2735.611110289" watchObservedRunningTime="2026-01-30 09:15:30.649721462 +0000 UTC m=+2735.622033013" Jan 30 09:15:31 crc kubenswrapper[4758]: I0130 09:15:31.149485 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7xp27" 
Jan 30 09:15:31 crc kubenswrapper[4758]: I0130 09:15:31.149580 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:32 crc kubenswrapper[4758]: I0130 09:15:32.204490 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7xp27" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerName="registry-server" probeResult="failure" output=< Jan 30 09:15:32 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:15:32 crc kubenswrapper[4758]: > Jan 30 09:15:41 crc kubenswrapper[4758]: I0130 09:15:41.205284 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:41 crc kubenswrapper[4758]: I0130 09:15:41.265681 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:41 crc kubenswrapper[4758]: I0130 09:15:41.784035 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7xp27"] Jan 30 09:15:42 crc kubenswrapper[4758]: I0130 09:15:42.726167 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7xp27" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerName="registry-server" containerID="cri-o://9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2" gracePeriod=2 Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.184291 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.360271 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-catalog-content\") pod \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.360579 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-utilities\") pod \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.360696 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jntk8\" (UniqueName: \"kubernetes.io/projected/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-kube-api-access-jntk8\") pod \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\" (UID: \"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe\") " Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.361361 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-utilities" (OuterVolumeSpecName: "utilities") pod "97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" (UID: "97c54792-a1ff-4ad3-bcf5-9c2562b25dfe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.368235 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-kube-api-access-jntk8" (OuterVolumeSpecName: "kube-api-access-jntk8") pod "97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" (UID: "97c54792-a1ff-4ad3-bcf5-9c2562b25dfe"). InnerVolumeSpecName "kube-api-access-jntk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.462858 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.462885 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jntk8\" (UniqueName: \"kubernetes.io/projected/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-kube-api-access-jntk8\") on node \"crc\" DevicePath \"\"" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.489638 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" (UID: "97c54792-a1ff-4ad3-bcf5-9c2562b25dfe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.565658 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.737885 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xp27" event={"ID":"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe","Type":"ContainerDied","Data":"9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2"} Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.738024 4758 scope.go:117] "RemoveContainer" containerID="9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.737965 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7xp27" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.737816 4758 generic.go:334] "Generic (PLEG): container finished" podID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerID="9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2" exitCode=0 Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.738795 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7xp27" event={"ID":"97c54792-a1ff-4ad3-bcf5-9c2562b25dfe","Type":"ContainerDied","Data":"fe42b9a116c8937d09bddab48d970d2230399a89728a7a74032adb1a823c994d"} Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.764748 4758 scope.go:117] "RemoveContainer" containerID="402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.792111 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7xp27"] Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.792339 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7xp27"] Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.801988 4758 scope.go:117] "RemoveContainer" containerID="44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.846306 4758 scope.go:117] "RemoveContainer" containerID="9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2" Jan 30 09:15:43 crc kubenswrapper[4758]: E0130 09:15:43.846654 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2\": container with ID starting with 9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2 not found: ID does not exist" containerID="9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.846692 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2"} err="failed to get container status \"9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2\": rpc error: code = NotFound desc = could not find container \"9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2\": container with ID starting with 9efdefcdab50e89e8a525b5cfb522abfe981d9984170cdbc13f5b9f1f61df8e2 not found: ID does not exist" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.846717 4758 scope.go:117] "RemoveContainer" containerID="402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3" Jan 30 09:15:43 crc kubenswrapper[4758]: E0130 09:15:43.847096 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3\": container with ID starting with 402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3 not found: ID does not exist" containerID="402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.847162 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3"} err="failed to get container status 
\"402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3\": rpc error: code = NotFound desc = could not find container \"402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3\": container with ID starting with 402f6a84593411afe8902c56a63154857310d3a6930f5fc68c9e6fde1aea39a3 not found: ID does not exist" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.847302 4758 scope.go:117] "RemoveContainer" containerID="44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6" Jan 30 09:15:43 crc kubenswrapper[4758]: E0130 09:15:43.847656 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6\": container with ID starting with 44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6 not found: ID does not exist" containerID="44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6" Jan 30 09:15:43 crc kubenswrapper[4758]: I0130 09:15:43.847682 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6"} err="failed to get container status \"44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6\": rpc error: code = NotFound desc = could not find container \"44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6\": container with ID starting with 44b750c1ebd76031f5906ae6e13ca69d1e82b2b7f996e9b3467ef0dd1dd2d6f6 not found: ID does not exist" Jan 30 09:15:45 crc kubenswrapper[4758]: I0130 09:15:45.779651 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" path="/var/lib/kubelet/pods/97c54792-a1ff-4ad3-bcf5-9c2562b25dfe/volumes" Jan 30 09:16:22 crc kubenswrapper[4758]: I0130 09:16:22.387641 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:16:22 crc kubenswrapper[4758]: I0130 09:16:22.388154 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:16:52 crc kubenswrapper[4758]: I0130 09:16:52.387242 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:16:52 crc kubenswrapper[4758]: I0130 09:16:52.387797 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:16:53 crc kubenswrapper[4758]: I0130 09:16:53.306542 4758 generic.go:334] "Generic (PLEG): container finished" podID="48c9d5d6-6dc5-4848-bade-3c302106b074" containerID="1d79080e887299522bedac745c89bf263e53d4b14d66aaebf1cebb95d50114f0" 
Jan 30 09:16:53 crc kubenswrapper[4758]: I0130 09:16:53.306588 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" event={"ID":"48c9d5d6-6dc5-4848-bade-3c302106b074","Type":"ContainerDied","Data":"1d79080e887299522bedac745c89bf263e53d4b14d66aaebf1cebb95d50114f0"}
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.737972 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z"
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.758089 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-extra-config-0\") pod \"48c9d5d6-6dc5-4848-bade-3c302106b074\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") "
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.758233 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-0\") pod \"48c9d5d6-6dc5-4848-bade-3c302106b074\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") "
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.758436 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-0\") pod \"48c9d5d6-6dc5-4848-bade-3c302106b074\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") "
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.758488 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-combined-ca-bundle\") pod \"48c9d5d6-6dc5-4848-bade-3c302106b074\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") "
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.758541 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-1\") pod \"48c9d5d6-6dc5-4848-bade-3c302106b074\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") "
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.758585 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-1\") pod \"48c9d5d6-6dc5-4848-bade-3c302106b074\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") "
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.758726 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cq5v\" (UniqueName: \"kubernetes.io/projected/48c9d5d6-6dc5-4848-bade-3c302106b074-kube-api-access-4cq5v\") pod \"48c9d5d6-6dc5-4848-bade-3c302106b074\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") "
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.758994 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-ssh-key-openstack-edpm-ipam\") pod \"48c9d5d6-6dc5-4848-bade-3c302106b074\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") "
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.759324 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-inventory\") pod \"48c9d5d6-6dc5-4848-bade-3c302106b074\" (UID: \"48c9d5d6-6dc5-4848-bade-3c302106b074\") "
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.783934 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "48c9d5d6-6dc5-4848-bade-3c302106b074" (UID: "48c9d5d6-6dc5-4848-bade-3c302106b074"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.789977 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "48c9d5d6-6dc5-4848-bade-3c302106b074" (UID: "48c9d5d6-6dc5-4848-bade-3c302106b074"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.792744 4758 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.796181 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c9d5d6-6dc5-4848-bade-3c302106b074-kube-api-access-4cq5v" (OuterVolumeSpecName: "kube-api-access-4cq5v") pod "48c9d5d6-6dc5-4848-bade-3c302106b074" (UID: "48c9d5d6-6dc5-4848-bade-3c302106b074"). InnerVolumeSpecName "kube-api-access-4cq5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.830796 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "48c9d5d6-6dc5-4848-bade-3c302106b074" (UID: "48c9d5d6-6dc5-4848-bade-3c302106b074"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.835227 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "48c9d5d6-6dc5-4848-bade-3c302106b074" (UID: "48c9d5d6-6dc5-4848-bade-3c302106b074"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.838916 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "48c9d5d6-6dc5-4848-bade-3c302106b074" (UID: "48c9d5d6-6dc5-4848-bade-3c302106b074"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.865470 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "48c9d5d6-6dc5-4848-bade-3c302106b074" (UID: "48c9d5d6-6dc5-4848-bade-3c302106b074"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.868075 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "48c9d5d6-6dc5-4848-bade-3c302106b074" (UID: "48c9d5d6-6dc5-4848-bade-3c302106b074"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.873822 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-inventory" (OuterVolumeSpecName: "inventory") pod "48c9d5d6-6dc5-4848-bade-3c302106b074" (UID: "48c9d5d6-6dc5-4848-bade-3c302106b074"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.894281 4758 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.894317 4758 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.894327 4758 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.894335 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cq5v\" (UniqueName: \"kubernetes.io/projected/48c9d5d6-6dc5-4848-bade-3c302106b074-kube-api-access-4cq5v\") on node \"crc\" DevicePath \"\""
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.894353 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.894363 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.894373 4758 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Jan 30 09:16:54 crc kubenswrapper[4758]: I0130 09:16:54.894381 4758 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
\"kubernetes.io/secret/48c9d5d6-6dc5-4848-bade-3c302106b074-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.323397 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" event={"ID":"48c9d5d6-6dc5-4848-bade-3c302106b074","Type":"ContainerDied","Data":"7d5419d178211565e085acf1cdb2716a8786ff13a1a1ac8b9c2a26dd960a1501"} Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.323442 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d5419d178211565e085acf1cdb2716a8786ff13a1a1ac8b9c2a26dd960a1501" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.323494 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-vfn7z" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.439233 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn"] Jan 30 09:16:55 crc kubenswrapper[4758]: E0130 09:16:55.439628 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c9d5d6-6dc5-4848-bade-3c302106b074" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.439646 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c9d5d6-6dc5-4848-bade-3c302106b074" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 09:16:55 crc kubenswrapper[4758]: E0130 09:16:55.439660 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerName="extract-content" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.439667 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerName="extract-content" Jan 30 09:16:55 crc kubenswrapper[4758]: E0130 09:16:55.439675 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerName="extract-utilities" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.439681 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerName="extract-utilities" Jan 30 09:16:55 crc kubenswrapper[4758]: E0130 09:16:55.439688 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerName="registry-server" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.439694 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerName="registry-server" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.439865 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="48c9d5d6-6dc5-4848-bade-3c302106b074" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.439886 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="97c54792-a1ff-4ad3-bcf5-9c2562b25dfe" containerName="registry-server" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.440497 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.448508 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.448585 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.448638 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zmdqq" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.448809 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.448855 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.451155 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn"] Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.503578 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.503687 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.503712 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.503744 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.503763 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdvbl\" (UniqueName: \"kubernetes.io/projected/38932896-a566-4440-b672-33909cb638b0-kube-api-access-qdvbl\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.503803 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.503833 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.605276 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.605344 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.605395 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.605423 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdvbl\" (UniqueName: \"kubernetes.io/projected/38932896-a566-4440-b672-33909cb638b0-kube-api-access-qdvbl\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.605483 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.605518 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.605581 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.609408 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.609595 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.609917 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.610217 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.611161 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.614168 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.622352 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdvbl\" (UniqueName: 
\"kubernetes.io/projected/38932896-a566-4440-b672-33909cb638b0-kube-api-access-qdvbl\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:55 crc kubenswrapper[4758]: I0130 09:16:55.763673 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:16:56 crc kubenswrapper[4758]: I0130 09:16:56.283215 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn"] Jan 30 09:16:56 crc kubenswrapper[4758]: I0130 09:16:56.337412 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" event={"ID":"38932896-a566-4440-b672-33909cb638b0","Type":"ContainerStarted","Data":"4ad968b66da815adc412e79bec231c9a8b88fe94c19e206b409cd68c1ca8ab01"} Jan 30 09:16:56 crc kubenswrapper[4758]: I0130 09:16:56.732368 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 09:16:57 crc kubenswrapper[4758]: I0130 09:16:57.348196 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" event={"ID":"38932896-a566-4440-b672-33909cb638b0","Type":"ContainerStarted","Data":"eb7433698d4689365258f066619568b60512753789e2f4616d12bb0530207c37"} Jan 30 09:16:57 crc kubenswrapper[4758]: I0130 09:16:57.369354 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" podStartSLOduration=1.9226421820000001 podStartE2EDuration="2.369333689s" podCreationTimestamp="2026-01-30 09:16:55 +0000 UTC" firstStartedPulling="2026-01-30 09:16:56.282434237 +0000 UTC m=+2821.254745788" lastFinishedPulling="2026-01-30 09:16:56.729125744 +0000 UTC m=+2821.701437295" observedRunningTime="2026-01-30 09:16:57.36620413 +0000 UTC m=+2822.338515681" watchObservedRunningTime="2026-01-30 09:16:57.369333689 +0000 UTC m=+2822.341645240" Jan 30 09:17:22 crc kubenswrapper[4758]: I0130 09:17:22.387148 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:17:22 crc kubenswrapper[4758]: I0130 09:17:22.388567 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:17:22 crc kubenswrapper[4758]: I0130 09:17:22.388699 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:17:22 crc kubenswrapper[4758]: I0130 09:17:22.389559 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" 
Jan 30 09:17:22 crc kubenswrapper[4758]: I0130 09:17:22.389729 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" gracePeriod=600
Jan 30 09:17:22 crc kubenswrapper[4758]: E0130 09:17:22.517749 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:17:22 crc kubenswrapper[4758]: I0130 09:17:22.597563 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" exitCode=0
Jan 30 09:17:22 crc kubenswrapper[4758]: I0130 09:17:22.597608 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202"}
Jan 30 09:17:22 crc kubenswrapper[4758]: I0130 09:17:22.597668 4758 scope.go:117] "RemoveContainer" containerID="2cc215bdce1ca5af9f8c68a84b985e34df78173fce9f59d165b301292c9ca0d7"
Jan 30 09:17:22 crc kubenswrapper[4758]: I0130 09:17:22.598513 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202"
Jan 30 09:17:22 crc kubenswrapper[4758]: E0130 09:17:22.598902 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:17:32 crc kubenswrapper[4758]: I0130 09:17:32.769947 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202"
Jan 30 09:17:32 crc kubenswrapper[4758]: E0130 09:17:32.770703 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:17:45 crc kubenswrapper[4758]: I0130 09:17:45.775337 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202"
Jan 30 09:17:45 crc kubenswrapper[4758]: E0130 09:17:45.776095 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
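Note: the liveness probe that keeps failing here is a plain HTTP GET against http://127.0.0.1:8798/health (endpoint taken from the probe output above); "connect: connection refused" means nothing is listening on that port at all, not that a handler is slow. A stand-alone reproduction of the check, with an assumed timeout value:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second} // timeout is an assumption, not the pod spec's value
    	resp, err := client.Get("http://127.0.0.1:8798/health")
    	if err != nil {
    		// "dial tcp 127.0.0.1:8798: connect: connection refused" lands here:
    		// the daemon is not serving its health endpoint, so the probe fails.
    		fmt.Println("probe failure:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("probe result:", resp.Status) // a 2xx/3xx status counts as success for HTTP probes
    }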
Jan 30 09:17:58 crc kubenswrapper[4758]: I0130 09:17:58.769904 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202"
Jan 30 09:17:58 crc kubenswrapper[4758]: E0130 09:17:58.771022 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.310208 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-spvmr"]
Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.313307 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spvmr"
Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.329859 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-spvmr"]
Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.412368 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-catalog-content\") pod \"redhat-marketplace-spvmr\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " pod="openshift-marketplace/redhat-marketplace-spvmr"
Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.412545 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r24x\" (UniqueName: \"kubernetes.io/projected/644832ef-4089-466f-9768-f30dfc4b6ef1-kube-api-access-5r24x\") pod \"redhat-marketplace-spvmr\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " pod="openshift-marketplace/redhat-marketplace-spvmr"
Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.412594 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-utilities\") pod \"redhat-marketplace-spvmr\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " pod="openshift-marketplace/redhat-marketplace-spvmr"
Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.513715 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-catalog-content\") pod \"redhat-marketplace-spvmr\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " pod="openshift-marketplace/redhat-marketplace-spvmr"
Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.513864 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r24x\" (UniqueName: \"kubernetes.io/projected/644832ef-4089-466f-9768-f30dfc4b6ef1-kube-api-access-5r24x\") pod \"redhat-marketplace-spvmr\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " pod="openshift-marketplace/redhat-marketplace-spvmr"
Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.513904 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-utilities\") pod \"redhat-marketplace-spvmr\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " pod="openshift-marketplace/redhat-marketplace-spvmr"
Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.514242 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-utilities\") pod \"redhat-marketplace-spvmr\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " pod="openshift-marketplace/redhat-marketplace-spvmr"
Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.514442 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-catalog-content\") pod \"redhat-marketplace-spvmr\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " pod="openshift-marketplace/redhat-marketplace-spvmr"
Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.534821 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r24x\" (UniqueName: \"kubernetes.io/projected/644832ef-4089-466f-9768-f30dfc4b6ef1-kube-api-access-5r24x\") pod \"redhat-marketplace-spvmr\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " pod="openshift-marketplace/redhat-marketplace-spvmr"
Jan 30 09:18:04 crc kubenswrapper[4758]: I0130 09:18:04.645347 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spvmr"
Jan 30 09:18:05 crc kubenswrapper[4758]: I0130 09:18:05.176852 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-spvmr"]
Jan 30 09:18:05 crc kubenswrapper[4758]: I0130 09:18:05.976564 4758 generic.go:334] "Generic (PLEG): container finished" podID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerID="c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f" exitCode=0
Jan 30 09:18:05 crc kubenswrapper[4758]: I0130 09:18:05.976609 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spvmr" event={"ID":"644832ef-4089-466f-9768-f30dfc4b6ef1","Type":"ContainerDied","Data":"c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f"}
Jan 30 09:18:05 crc kubenswrapper[4758]: I0130 09:18:05.976635 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spvmr" event={"ID":"644832ef-4089-466f-9768-f30dfc4b6ef1","Type":"ContainerStarted","Data":"1493bc131fe2a14d48215a5096bfd60d6949daca73eab63a5d511884ee5335cb"}
Jan 30 09:18:06 crc kubenswrapper[4758]: I0130 09:18:06.986975 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spvmr" event={"ID":"644832ef-4089-466f-9768-f30dfc4b6ef1","Type":"ContainerStarted","Data":"7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11"}
Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.086366 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s5xsj"]
Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.088807 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s5xsj"
Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.098855 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5xsj"]
Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.189032 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nntvm\" (UniqueName: \"kubernetes.io/projected/e63975c5-3525-4e0b-9bd5-3dd93de36d38-kube-api-access-nntvm\") pod \"certified-operators-s5xsj\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " pod="openshift-marketplace/certified-operators-s5xsj"
Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.189371 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-utilities\") pod \"certified-operators-s5xsj\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " pod="openshift-marketplace/certified-operators-s5xsj"
Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.189418 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-catalog-content\") pod \"certified-operators-s5xsj\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " pod="openshift-marketplace/certified-operators-s5xsj"
Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.291329 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nntvm\" (UniqueName: \"kubernetes.io/projected/e63975c5-3525-4e0b-9bd5-3dd93de36d38-kube-api-access-nntvm\") pod \"certified-operators-s5xsj\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " pod="openshift-marketplace/certified-operators-s5xsj"
Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.291388 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-utilities\") pod \"certified-operators-s5xsj\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " pod="openshift-marketplace/certified-operators-s5xsj"
Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.291440 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-catalog-content\") pod \"certified-operators-s5xsj\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " pod="openshift-marketplace/certified-operators-s5xsj"
Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.292264 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-catalog-content\") pod \"certified-operators-s5xsj\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " pod="openshift-marketplace/certified-operators-s5xsj"
Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.292356 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-utilities\") pod \"certified-operators-s5xsj\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " pod="openshift-marketplace/certified-operators-s5xsj"
Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.315125 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nntvm\" (UniqueName: \"kubernetes.io/projected/e63975c5-3525-4e0b-9bd5-3dd93de36d38-kube-api-access-nntvm\") pod \"certified-operators-s5xsj\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " pod="openshift-marketplace/certified-operators-s5xsj"
Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.407414 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s5xsj"
Jan 30 09:18:07 crc kubenswrapper[4758]: I0130 09:18:07.949848 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5xsj"]
Jan 30 09:18:08 crc kubenswrapper[4758]: I0130 09:18:08.010451 4758 generic.go:334] "Generic (PLEG): container finished" podID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerID="7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11" exitCode=0
Jan 30 09:18:08 crc kubenswrapper[4758]: I0130 09:18:08.010570 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spvmr" event={"ID":"644832ef-4089-466f-9768-f30dfc4b6ef1","Type":"ContainerDied","Data":"7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11"}
Jan 30 09:18:08 crc kubenswrapper[4758]: I0130 09:18:08.018969 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5xsj" event={"ID":"e63975c5-3525-4e0b-9bd5-3dd93de36d38","Type":"ContainerStarted","Data":"7cf806a302c6f18474a3bf8181f8ad7f7a77bc8704b82b5808c7dbe7f1566ffc"}
Jan 30 09:18:09 crc kubenswrapper[4758]: I0130 09:18:09.029235 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spvmr" event={"ID":"644832ef-4089-466f-9768-f30dfc4b6ef1","Type":"ContainerStarted","Data":"30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5"}
Jan 30 09:18:09 crc kubenswrapper[4758]: I0130 09:18:09.030972 4758 generic.go:334] "Generic (PLEG): container finished" podID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerID="041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032" exitCode=0
Jan 30 09:18:09 crc kubenswrapper[4758]: I0130 09:18:09.031016 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5xsj" event={"ID":"e63975c5-3525-4e0b-9bd5-3dd93de36d38","Type":"ContainerDied","Data":"041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032"}
Jan 30 09:18:09 crc kubenswrapper[4758]: I0130 09:18:09.062017 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-spvmr" podStartSLOduration=2.507199319 podStartE2EDuration="5.061996929s" podCreationTimestamp="2026-01-30 09:18:04 +0000 UTC" firstStartedPulling="2026-01-30 09:18:05.978567485 +0000 UTC m=+2890.950879036" lastFinishedPulling="2026-01-30 09:18:08.533365095 +0000 UTC m=+2893.505676646" observedRunningTime="2026-01-30 09:18:09.058029175 +0000 UTC m=+2894.030340736" watchObservedRunningTime="2026-01-30 09:18:09.061996929 +0000 UTC m=+2894.034308480"
Jan 30 09:18:10 crc kubenswrapper[4758]: I0130 09:18:10.041136 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5xsj" event={"ID":"e63975c5-3525-4e0b-9bd5-3dd93de36d38","Type":"ContainerStarted","Data":"2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b"}
Jan 30 09:18:11 crc kubenswrapper[4758]: I0130 09:18:11.768919 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202"
Jan 30 09:18:11 crc kubenswrapper[4758]: E0130 09:18:11.769503 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:18:13 crc kubenswrapper[4758]: I0130 09:18:13.075265 4758 generic.go:334] "Generic (PLEG): container finished" podID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerID="2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b" exitCode=0
Jan 30 09:18:13 crc kubenswrapper[4758]: I0130 09:18:13.075335 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5xsj" event={"ID":"e63975c5-3525-4e0b-9bd5-3dd93de36d38","Type":"ContainerDied","Data":"2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b"}
Jan 30 09:18:14 crc kubenswrapper[4758]: I0130 09:18:14.087208 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5xsj" event={"ID":"e63975c5-3525-4e0b-9bd5-3dd93de36d38","Type":"ContainerStarted","Data":"daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94"}
Jan 30 09:18:14 crc kubenswrapper[4758]: I0130 09:18:14.108225 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s5xsj" podStartSLOduration=2.652365807 podStartE2EDuration="7.108189656s" podCreationTimestamp="2026-01-30 09:18:07 +0000 UTC" firstStartedPulling="2026-01-30 09:18:09.03246651 +0000 UTC m=+2894.004778061" lastFinishedPulling="2026-01-30 09:18:13.488290359 +0000 UTC m=+2898.460601910" observedRunningTime="2026-01-30 09:18:14.103120766 +0000 UTC m=+2899.075432337" watchObservedRunningTime="2026-01-30 09:18:14.108189656 +0000 UTC m=+2899.080501197"
Jan 30 09:18:14 crc kubenswrapper[4758]: I0130 09:18:14.646834 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-spvmr"
Jan 30 09:18:14 crc kubenswrapper[4758]: I0130 09:18:14.646876 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-spvmr"
Jan 30 09:18:14 crc kubenswrapper[4758]: I0130 09:18:14.693776 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-spvmr"
Jan 30 09:18:15 crc kubenswrapper[4758]: I0130 09:18:15.188564 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-spvmr"
Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.077853 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-spvmr"]
Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.111185 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-spvmr" podUID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerName="registry-server" containerID="cri-o://30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5" gracePeriod=2
Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.421767 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s5xsj"
pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.422082 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.489423 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.549088 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.627430 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-utilities\") pod \"644832ef-4089-466f-9768-f30dfc4b6ef1\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.627555 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-catalog-content\") pod \"644832ef-4089-466f-9768-f30dfc4b6ef1\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.627698 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r24x\" (UniqueName: \"kubernetes.io/projected/644832ef-4089-466f-9768-f30dfc4b6ef1-kube-api-access-5r24x\") pod \"644832ef-4089-466f-9768-f30dfc4b6ef1\" (UID: \"644832ef-4089-466f-9768-f30dfc4b6ef1\") " Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.628878 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-utilities" (OuterVolumeSpecName: "utilities") pod "644832ef-4089-466f-9768-f30dfc4b6ef1" (UID: "644832ef-4089-466f-9768-f30dfc4b6ef1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.634317 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644832ef-4089-466f-9768-f30dfc4b6ef1-kube-api-access-5r24x" (OuterVolumeSpecName: "kube-api-access-5r24x") pod "644832ef-4089-466f-9768-f30dfc4b6ef1" (UID: "644832ef-4089-466f-9768-f30dfc4b6ef1"). InnerVolumeSpecName "kube-api-access-5r24x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.651633 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "644832ef-4089-466f-9768-f30dfc4b6ef1" (UID: "644832ef-4089-466f-9768-f30dfc4b6ef1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.729817 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.729859 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644832ef-4089-466f-9768-f30dfc4b6ef1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:18:17 crc kubenswrapper[4758]: I0130 09:18:17.729875 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r24x\" (UniqueName: \"kubernetes.io/projected/644832ef-4089-466f-9768-f30dfc4b6ef1-kube-api-access-5r24x\") on node \"crc\" DevicePath \"\"" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.122709 4758 generic.go:334] "Generic (PLEG): container finished" podID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerID="30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5" exitCode=0 Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.122800 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spvmr" event={"ID":"644832ef-4089-466f-9768-f30dfc4b6ef1","Type":"ContainerDied","Data":"30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5"} Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.122860 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spvmr" event={"ID":"644832ef-4089-466f-9768-f30dfc4b6ef1","Type":"ContainerDied","Data":"1493bc131fe2a14d48215a5096bfd60d6949daca73eab63a5d511884ee5335cb"} Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.122888 4758 scope.go:117] "RemoveContainer" containerID="30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.122812 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spvmr" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.151300 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-spvmr"] Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.162526 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-spvmr"] Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.176913 4758 scope.go:117] "RemoveContainer" containerID="7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.190581 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.216511 4758 scope.go:117] "RemoveContainer" containerID="c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.262652 4758 scope.go:117] "RemoveContainer" containerID="30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5" Jan 30 09:18:18 crc kubenswrapper[4758]: E0130 09:18:18.263204 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5\": container with ID starting with 30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5 not found: ID does not exist" containerID="30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.263239 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5"} err="failed to get container status \"30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5\": rpc error: code = NotFound desc = could not find container \"30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5\": container with ID starting with 30733fe16b7593f5b9b61ec1ff9874356fdc8885062e76495614ee664cd2a1e5 not found: ID does not exist" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.263263 4758 scope.go:117] "RemoveContainer" containerID="7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11" Jan 30 09:18:18 crc kubenswrapper[4758]: E0130 09:18:18.263480 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11\": container with ID starting with 7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11 not found: ID does not exist" containerID="7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.263501 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11"} err="failed to get container status \"7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11\": rpc error: code = NotFound desc = could not find container \"7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11\": container with ID starting with 7d520ef602d931fef5ce3dc298da5304b8722cba0c79a6a360ed892b0015be11 not found: ID does not exist" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.263532 4758 scope.go:117] "RemoveContainer" 
containerID="c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f" Jan 30 09:18:18 crc kubenswrapper[4758]: E0130 09:18:18.263725 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f\": container with ID starting with c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f not found: ID does not exist" containerID="c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f" Jan 30 09:18:18 crc kubenswrapper[4758]: I0130 09:18:18.263747 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f"} err="failed to get container status \"c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f\": rpc error: code = NotFound desc = could not find container \"c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f\": container with ID starting with c7cbe359014fffed28f791701bbd039658c41763337c99e38fb399565f0afa7f not found: ID does not exist" Jan 30 09:18:19 crc kubenswrapper[4758]: I0130 09:18:19.779663 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644832ef-4089-466f-9768-f30dfc4b6ef1" path="/var/lib/kubelet/pods/644832ef-4089-466f-9768-f30dfc4b6ef1/volumes" Jan 30 09:18:19 crc kubenswrapper[4758]: I0130 09:18:19.874153 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s5xsj"] Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.138047 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s5xsj" podUID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerName="registry-server" containerID="cri-o://daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94" gracePeriod=2 Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.570967 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.683270 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-catalog-content\") pod \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.683695 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-utilities\") pod \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.683873 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nntvm\" (UniqueName: \"kubernetes.io/projected/e63975c5-3525-4e0b-9bd5-3dd93de36d38-kube-api-access-nntvm\") pod \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\" (UID: \"e63975c5-3525-4e0b-9bd5-3dd93de36d38\") " Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.685963 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-utilities" (OuterVolumeSpecName: "utilities") pod "e63975c5-3525-4e0b-9bd5-3dd93de36d38" (UID: "e63975c5-3525-4e0b-9bd5-3dd93de36d38"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.690156 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e63975c5-3525-4e0b-9bd5-3dd93de36d38-kube-api-access-nntvm" (OuterVolumeSpecName: "kube-api-access-nntvm") pod "e63975c5-3525-4e0b-9bd5-3dd93de36d38" (UID: "e63975c5-3525-4e0b-9bd5-3dd93de36d38"). InnerVolumeSpecName "kube-api-access-nntvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.738103 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e63975c5-3525-4e0b-9bd5-3dd93de36d38" (UID: "e63975c5-3525-4e0b-9bd5-3dd93de36d38"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.786371 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.786415 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63975c5-3525-4e0b-9bd5-3dd93de36d38-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:18:20 crc kubenswrapper[4758]: I0130 09:18:20.786428 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nntvm\" (UniqueName: \"kubernetes.io/projected/e63975c5-3525-4e0b-9bd5-3dd93de36d38-kube-api-access-nntvm\") on node \"crc\" DevicePath \"\"" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.149398 4758 generic.go:334] "Generic (PLEG): container finished" podID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerID="daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94" exitCode=0 Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.149452 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5xsj" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.149471 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5xsj" event={"ID":"e63975c5-3525-4e0b-9bd5-3dd93de36d38","Type":"ContainerDied","Data":"daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94"} Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.149842 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5xsj" event={"ID":"e63975c5-3525-4e0b-9bd5-3dd93de36d38","Type":"ContainerDied","Data":"7cf806a302c6f18474a3bf8181f8ad7f7a77bc8704b82b5808c7dbe7f1566ffc"} Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.149866 4758 scope.go:117] "RemoveContainer" containerID="daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.167289 4758 scope.go:117] "RemoveContainer" containerID="2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.202873 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s5xsj"] Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.207611 4758 scope.go:117] "RemoveContainer" containerID="041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.214014 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s5xsj"] Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.246168 4758 scope.go:117] "RemoveContainer" containerID="daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94" Jan 30 09:18:21 crc kubenswrapper[4758]: E0130 09:18:21.246660 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94\": container with ID starting with daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94 not found: ID does not exist" containerID="daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.246739 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94"} err="failed to get container status \"daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94\": rpc error: code = NotFound desc = could not find container \"daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94\": container with ID starting with daa9ecd4a19614f9e2137aadca7b47d427b8fecd53fb5ecf63587f7268742b94 not found: ID does not exist" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.246773 4758 scope.go:117] "RemoveContainer" containerID="2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b" Jan 30 09:18:21 crc kubenswrapper[4758]: E0130 09:18:21.247356 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b\": container with ID starting with 2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b not found: ID does not exist" containerID="2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.247379 4758 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b"} err="failed to get container status \"2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b\": rpc error: code = NotFound desc = could not find container \"2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b\": container with ID starting with 2ca08c0f57fddecc3f2ce4063e2a065b2605f514c647bb699a113511585cd75b not found: ID does not exist" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.247395 4758 scope.go:117] "RemoveContainer" containerID="041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032" Jan 30 09:18:21 crc kubenswrapper[4758]: E0130 09:18:21.247810 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032\": container with ID starting with 041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032 not found: ID does not exist" containerID="041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.247834 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032"} err="failed to get container status \"041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032\": rpc error: code = NotFound desc = could not find container \"041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032\": container with ID starting with 041fa8232ac389f6201edcb34d0cb6beb86e85fed3036956d7f37844d7e47032 not found: ID does not exist" Jan 30 09:18:21 crc kubenswrapper[4758]: I0130 09:18:21.778226 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" path="/var/lib/kubelet/pods/e63975c5-3525-4e0b-9bd5-3dd93de36d38/volumes" Jan 30 09:18:25 crc kubenswrapper[4758]: I0130 09:18:25.783526 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:18:25 crc kubenswrapper[4758]: E0130 09:18:25.784799 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:18:38 crc kubenswrapper[4758]: I0130 09:18:38.769098 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:18:38 crc kubenswrapper[4758]: E0130 09:18:38.769749 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:18:52 crc kubenswrapper[4758]: I0130 09:18:52.769049 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:18:52 crc 
kubenswrapper[4758]: E0130 09:18:52.769716 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:19:04 crc kubenswrapper[4758]: I0130 09:19:04.768906 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:19:04 crc kubenswrapper[4758]: E0130 09:19:04.769706 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:19:17 crc kubenswrapper[4758]: I0130 09:19:17.768240 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:19:17 crc kubenswrapper[4758]: E0130 09:19:17.768959 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:19:31 crc kubenswrapper[4758]: I0130 09:19:31.769296 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:19:31 crc kubenswrapper[4758]: E0130 09:19:31.770205 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:19:42 crc kubenswrapper[4758]: I0130 09:19:42.768566 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:19:42 crc kubenswrapper[4758]: E0130 09:19:42.769333 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:19:56 crc kubenswrapper[4758]: I0130 09:19:56.768444 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:19:56 crc kubenswrapper[4758]: E0130 09:19:56.769213 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
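
The pairs above repeat every 10-15 seconds: each sync attempt asks to restart machine-config-daemon, and the pod worker refuses because the crash-loop back-off window has not elapsed. The kubelet's restart back-off is a capped exponential; the commonly documented defaults are a 10s initial delay that doubles per crash and is capped at 5 minutes, which is why a container that keeps failing settles into the constant "back-off 5m0s" message. A sketch of that policy under those assumed defaults (not read from this node's configuration):

    package main

    import (
        "fmt"
        "time"
    )

    // restartDelay returns a capped exponential back-off: 10s, 20s,
    // 40s, ... up to a 5m ceiling. Constants are assumed upstream
    // kubelet defaults, used here only to illustrate the mechanism.
    func restartDelay(crashCount int) time.Duration {
        const (
            base     = 10 * time.Second
            maxDelay = 5 * time.Minute
        )
        d := base
        for i := 1; i < crashCount; i++ {
            d *= 2
            if d >= maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 7; n++ {
            fmt.Printf("crash %d -> wait %v\n", n, restartDelay(n))
        }
        // From crash 6 on this prints 5m0s, matching the log's "back-off 5m0s".
    }
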
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:20:08 crc kubenswrapper[4758]: I0130 09:20:08.768573 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:20:08 crc kubenswrapper[4758]: E0130 09:20:08.769299 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:20:15 crc kubenswrapper[4758]: I0130 09:20:15.076693 4758 generic.go:334] "Generic (PLEG): container finished" podID="38932896-a566-4440-b672-33909cb638b0" containerID="eb7433698d4689365258f066619568b60512753789e2f4616d12bb0530207c37" exitCode=0 Jan 30 09:20:15 crc kubenswrapper[4758]: I0130 09:20:15.076784 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" event={"ID":"38932896-a566-4440-b672-33909cb638b0","Type":"ContainerDied","Data":"eb7433698d4689365258f066619568b60512753789e2f4616d12bb0530207c37"} Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.481874 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.604260 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-inventory\") pod \"38932896-a566-4440-b672-33909cb638b0\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.604386 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-2\") pod \"38932896-a566-4440-b672-33909cb638b0\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.604497 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ssh-key-openstack-edpm-ipam\") pod \"38932896-a566-4440-b672-33909cb638b0\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.604565 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-telemetry-combined-ca-bundle\") pod \"38932896-a566-4440-b672-33909cb638b0\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.604692 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdvbl\" (UniqueName: \"kubernetes.io/projected/38932896-a566-4440-b672-33909cb638b0-kube-api-access-qdvbl\") pod 
\"38932896-a566-4440-b672-33909cb638b0\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.604732 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-1\") pod \"38932896-a566-4440-b672-33909cb638b0\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.604816 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-0\") pod \"38932896-a566-4440-b672-33909cb638b0\" (UID: \"38932896-a566-4440-b672-33909cb638b0\") " Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.610547 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38932896-a566-4440-b672-33909cb638b0-kube-api-access-qdvbl" (OuterVolumeSpecName: "kube-api-access-qdvbl") pod "38932896-a566-4440-b672-33909cb638b0" (UID: "38932896-a566-4440-b672-33909cb638b0"). InnerVolumeSpecName "kube-api-access-qdvbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.619750 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "38932896-a566-4440-b672-33909cb638b0" (UID: "38932896-a566-4440-b672-33909cb638b0"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.632211 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "38932896-a566-4440-b672-33909cb638b0" (UID: "38932896-a566-4440-b672-33909cb638b0"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.634308 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "38932896-a566-4440-b672-33909cb638b0" (UID: "38932896-a566-4440-b672-33909cb638b0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.634949 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-inventory" (OuterVolumeSpecName: "inventory") pod "38932896-a566-4440-b672-33909cb638b0" (UID: "38932896-a566-4440-b672-33909cb638b0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.638800 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "38932896-a566-4440-b672-33909cb638b0" (UID: "38932896-a566-4440-b672-33909cb638b0"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.640408 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "38932896-a566-4440-b672-33909cb638b0" (UID: "38932896-a566-4440-b672-33909cb638b0"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.707621 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.707655 4758 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.707665 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdvbl\" (UniqueName: \"kubernetes.io/projected/38932896-a566-4440-b672-33909cb638b0-kube-api-access-qdvbl\") on node \"crc\" DevicePath \"\"" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.707674 4758 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.707684 4758 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.707719 4758 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 09:20:16 crc kubenswrapper[4758]: I0130 09:20:16.707731 4758 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/38932896-a566-4440-b672-33909cb638b0-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 30 09:20:17 crc kubenswrapper[4758]: I0130 09:20:17.096418 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" event={"ID":"38932896-a566-4440-b672-33909cb638b0","Type":"ContainerDied","Data":"4ad968b66da815adc412e79bec231c9a8b88fe94c19e206b409cd68c1ca8ab01"} Jan 30 09:20:17 crc kubenswrapper[4758]: I0130 09:20:17.096738 4758 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4ad968b66da815adc412e79bec231c9a8b88fe94c19e206b409cd68c1ca8ab01" Jan 30 09:20:17 crc kubenswrapper[4758]: I0130 09:20:17.096483 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn" Jan 30 09:20:23 crc kubenswrapper[4758]: I0130 09:20:23.768642 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:20:23 crc kubenswrapper[4758]: E0130 09:20:23.769157 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:20:36 crc kubenswrapper[4758]: I0130 09:20:36.769245 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:20:36 crc kubenswrapper[4758]: E0130 09:20:36.770216 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:20:48 crc kubenswrapper[4758]: I0130 09:20:48.769118 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:20:48 crc kubenswrapper[4758]: E0130 09:20:48.769723 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.594700 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 09:20:58 crc kubenswrapper[4758]: E0130 09:20:58.596828 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerName="extract-utilities" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.596936 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerName="extract-utilities" Jan 30 09:20:58 crc kubenswrapper[4758]: E0130 09:20:58.596998 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerName="extract-content" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.597073 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerName="extract-content" Jan 30 09:20:58 crc kubenswrapper[4758]: E0130 09:20:58.597149 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerName="registry-server" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.597209 4758 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerName="registry-server" Jan 30 09:20:58 crc kubenswrapper[4758]: E0130 09:20:58.597272 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerName="extract-content" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.597324 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerName="extract-content" Jan 30 09:20:58 crc kubenswrapper[4758]: E0130 09:20:58.597406 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38932896-a566-4440-b672-33909cb638b0" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.597465 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="38932896-a566-4440-b672-33909cb638b0" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 09:20:58 crc kubenswrapper[4758]: E0130 09:20:58.597535 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerName="extract-utilities" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.597602 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerName="extract-utilities" Jan 30 09:20:58 crc kubenswrapper[4758]: E0130 09:20:58.597666 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerName="registry-server" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.597733 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerName="registry-server" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.598012 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="38932896-a566-4440-b672-33909cb638b0" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.598110 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="644832ef-4089-466f-9768-f30dfc4b6ef1" containerName="registry-server" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.598173 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e63975c5-3525-4e0b-9bd5-3dd93de36d38" containerName="registry-server" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.598827 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.601182 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bnbxz" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.601419 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.601574 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.603014 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.607669 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.700948 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.701022 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdxsh\" (UniqueName: \"kubernetes.io/projected/110e1168-332c-4165-bd6e-47419c571681-kube-api-access-kdxsh\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.701207 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.701309 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.701388 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.701646 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.701760 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.701802 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-config-data\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.701846 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.803857 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.803921 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.803941 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-config-data\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.803980 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.804028 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.804087 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdxsh\" (UniqueName: \"kubernetes.io/projected/110e1168-332c-4165-bd6e-47419c571681-kube-api-access-kdxsh\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.804133 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-workdir\") pod 
\"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.804164 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.804193 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.804672 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.805104 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.805249 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.805463 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-config-data\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.805915 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.811112 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.811338 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 
09:20:58.814087 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.825897 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdxsh\" (UniqueName: \"kubernetes.io/projected/110e1168-332c-4165-bd6e-47419c571681-kube-api-access-kdxsh\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.835407 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " pod="openstack/tempest-tests-tempest" Jan 30 09:20:58 crc kubenswrapper[4758]: I0130 09:20:58.940975 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 09:20:59 crc kubenswrapper[4758]: I0130 09:20:59.368027 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 09:20:59 crc kubenswrapper[4758]: I0130 09:20:59.372288 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:20:59 crc kubenswrapper[4758]: I0130 09:20:59.452904 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"110e1168-332c-4165-bd6e-47419c571681","Type":"ContainerStarted","Data":"333f254addae44025e00aa879c29d6e6ca58b2430bd9e1d7e1eba46b32166b13"} Jan 30 09:21:00 crc kubenswrapper[4758]: I0130 09:21:00.768633 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:21:00 crc kubenswrapper[4758]: E0130 09:21:00.769155 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:21:15 crc kubenswrapper[4758]: I0130 09:21:15.777893 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:21:15 crc kubenswrapper[4758]: E0130 09:21:15.779364 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:21:27 crc kubenswrapper[4758]: I0130 09:21:27.771248 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:21:27 crc kubenswrapper[4758]: E0130 09:21:27.772025 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:21:33 crc kubenswrapper[4758]: E0130 09:21:33.100656 4758 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 30 09:21:33 crc kubenswrapper[4758]: E0130 09:21:33.112209 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdxsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},
Jan 30 09:21:33 crc kubenswrapper[4758]: E0130 09:21:33.112209 4758 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdxsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(110e1168-332c-4165-bd6e-47419c571681): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 09:21:33 crc kubenswrapper[4758]: E0130 09:21:33.113703 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="110e1168-332c-4165-bd6e-47419c571681"
Jan 30 09:21:33 crc kubenswrapper[4758]: E0130 09:21:33.757234 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="110e1168-332c-4165-bd6e-47419c571681"
Jan 30 09:21:40 crc kubenswrapper[4758]: I0130 09:21:40.769101 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202"
Jan 30 09:21:40 crc kubenswrapper[4758]: E0130 09:21:40.769814 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:21:48 crc kubenswrapper[4758]: I0130 09:21:48.891737 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"110e1168-332c-4165-bd6e-47419c571681","Type":"ContainerStarted","Data":"9585d72b864ad0afa161d8615e2e581e1849cfc14f137d53455eed18ce1d77db"}
Jan 30 09:21:48 crc kubenswrapper[4758]: I0130 09:21:48.910996 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.075146353 podStartE2EDuration="51.910784209s" podCreationTimestamp="2026-01-30 09:20:57 +0000 UTC" firstStartedPulling="2026-01-30 09:20:59.372003016 +0000 UTC m=+3064.344314567" lastFinishedPulling="2026-01-30 09:21:47.207640872 +0000 UTC m=+3112.179952423" observedRunningTime="2026-01-30 09:21:48.906840585 +0000 UTC m=+3113.879152136" watchObservedRunningTime="2026-01-30 09:21:48.910784209 +0000 UTC m=+3113.883095760"
Jan 30 09:21:53 crc kubenswrapper[4758]: I0130 09:21:53.768579 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202"
Jan 30 09:21:53 crc kubenswrapper[4758]: E0130 09:21:53.769374 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
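
The startup-latency record above is internally consistent and shows why the SLO duration is so much smaller than the end-to-end duration: almost all of the ~52s went to the image pull that had previously failed with ErrImagePull. A quick check of the arithmetic, with the timestamps copied from the record (the SLO figure is the E2E duration minus pull time):

    package main

    import (
        "fmt"
        "time"
    )

    // Recomputes the durations in the pod_startup_latency_tracker
    // record above. The layout string is Go's default time.Time format.
    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-01-30 09:20:57 +0000 UTC")
        firstPull := parse("2026-01-30 09:20:59.372003016 +0000 UTC")
        lastPull := parse("2026-01-30 09:21:47.207640872 +0000 UTC")
        running := parse("2026-01-30 09:21:48.910784209 +0000 UTC") // watchObservedRunningTime

        e2e := running.Sub(created)      // 51.910784209s = podStartE2EDuration
        pull := lastPull.Sub(firstPull)  // 47.835637856s spent pulling the image
        fmt.Println(e2e, pull, e2e-pull) // e2e-pull = 4.075146353s = podStartSLOduration
    }
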
containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:22:05 crc kubenswrapper[4758]: E0130 09:22:05.779132 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:22:17 crc kubenswrapper[4758]: I0130 09:22:17.768866 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:22:17 crc kubenswrapper[4758]: E0130 09:22:17.769629 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:22:30 crc kubenswrapper[4758]: I0130 09:22:30.768943 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:22:31 crc kubenswrapper[4758]: I0130 09:22:31.259823 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"cb507a492aec7fadcf39a2125e28a32759b1a8e40688e90ab277c9b1c21ac13b"} Jan 30 09:24:52 crc kubenswrapper[4758]: I0130 09:24:52.387816 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:24:52 crc kubenswrapper[4758]: I0130 09:24:52.388350 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:25:02 crc kubenswrapper[4758]: I0130 09:25:02.572821 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="fd2d2fe7-5dac-4f3b-80f6-650712925495" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 09:25:02 crc kubenswrapper[4758]: I0130 09:25:02.572809 4758 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="fd2d2fe7-5dac-4f3b-80f6-650712925495" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 09:25:02 crc kubenswrapper[4758]: I0130 09:25:02.572809 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/nova-api-0" podUID="fd2d2fe7-5dac-4f3b-80f6-650712925495" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled 
Jan 30 09:25:02 crc kubenswrapper[4758]: I0130 09:25:02.572812 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/nova-api-0" podUID="fd2d2fe7-5dac-4f3b-80f6-650712925495" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 09:25:16 crc kubenswrapper[4758]: I0130 09:25:16.857653 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4fkv7"]
Jan 30 09:25:16 crc kubenswrapper[4758]: I0130 09:25:16.860381 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fkv7"
Jan 30 09:25:16 crc kubenswrapper[4758]: I0130 09:25:16.901884 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvw26\" (UniqueName: \"kubernetes.io/projected/6796d9ff-9d00-4d02-9bab-b5782c937e9c-kube-api-access-xvw26\") pod \"community-operators-4fkv7\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " pod="openshift-marketplace/community-operators-4fkv7"
Jan 30 09:25:16 crc kubenswrapper[4758]: I0130 09:25:16.902300 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-catalog-content\") pod \"community-operators-4fkv7\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " pod="openshift-marketplace/community-operators-4fkv7"
Jan 30 09:25:16 crc kubenswrapper[4758]: I0130 09:25:16.902730 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-utilities\") pod \"community-operators-4fkv7\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " pod="openshift-marketplace/community-operators-4fkv7"
Jan 30 09:25:16 crc kubenswrapper[4758]: I0130 09:25:16.944813 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4fkv7"]
Jan 30 09:25:17 crc kubenswrapper[4758]: I0130 09:25:17.003652 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvw26\" (UniqueName: \"kubernetes.io/projected/6796d9ff-9d00-4d02-9bab-b5782c937e9c-kube-api-access-xvw26\") pod \"community-operators-4fkv7\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " pod="openshift-marketplace/community-operators-4fkv7"
Jan 30 09:25:17 crc kubenswrapper[4758]: I0130 09:25:17.003709 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-catalog-content\") pod \"community-operators-4fkv7\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " pod="openshift-marketplace/community-operators-4fkv7"
Jan 30 09:25:17 crc kubenswrapper[4758]: I0130 09:25:17.003807 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-utilities\") pod \"community-operators-4fkv7\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " pod="openshift-marketplace/community-operators-4fkv7"
Jan 30 09:25:17 crc kubenswrapper[4758]: I0130 09:25:17.004380 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-catalog-content\") pod \"community-operators-4fkv7\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " pod="openshift-marketplace/community-operators-4fkv7"
Jan 30 09:25:17 crc kubenswrapper[4758]: I0130 09:25:17.004394 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-utilities\") pod \"community-operators-4fkv7\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " pod="openshift-marketplace/community-operators-4fkv7"
Jan 30 09:25:17 crc kubenswrapper[4758]: I0130 09:25:17.024582 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvw26\" (UniqueName: \"kubernetes.io/projected/6796d9ff-9d00-4d02-9bab-b5782c937e9c-kube-api-access-xvw26\") pod \"community-operators-4fkv7\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " pod="openshift-marketplace/community-operators-4fkv7"
Jan 30 09:25:17 crc kubenswrapper[4758]: I0130 09:25:17.185449 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fkv7"
Jan 30 09:25:18 crc kubenswrapper[4758]: I0130 09:25:18.509268 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4fkv7"]
Jan 30 09:25:18 crc kubenswrapper[4758]: W0130 09:25:18.514465 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6796d9ff_9d00_4d02_9bab_b5782c937e9c.slice/crio-b8e54a068799832547885c617e04ec61b45bb92e0212dfdc2af54909d1236107 WatchSource:0}: Error finding container b8e54a068799832547885c617e04ec61b45bb92e0212dfdc2af54909d1236107: Status 404 returned error can't find the container with id b8e54a068799832547885c617e04ec61b45bb92e0212dfdc2af54909d1236107
Jan 30 09:25:18 crc kubenswrapper[4758]: I0130 09:25:18.599933 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fkv7" event={"ID":"6796d9ff-9d00-4d02-9bab-b5782c937e9c","Type":"ContainerStarted","Data":"b8e54a068799832547885c617e04ec61b45bb92e0212dfdc2af54909d1236107"}
Jan 30 09:25:19 crc kubenswrapper[4758]: I0130 09:25:19.612646 4758 generic.go:334] "Generic (PLEG): container finished" podID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerID="7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b" exitCode=0
Jan 30 09:25:19 crc kubenswrapper[4758]: I0130 09:25:19.612710 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fkv7" event={"ID":"6796d9ff-9d00-4d02-9bab-b5782c937e9c","Type":"ContainerDied","Data":"7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b"}
Jan 30 09:25:20 crc kubenswrapper[4758]: I0130 09:25:20.626615 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fkv7" event={"ID":"6796d9ff-9d00-4d02-9bab-b5782c937e9c","Type":"ContainerStarted","Data":"b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab"}
Jan 30 09:25:22 crc kubenswrapper[4758]: I0130 09:25:22.387522 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 09:25:22 crc kubenswrapper[4758]: I0130 09:25:22.387921 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 09:25:22 crc kubenswrapper[4758]: I0130 09:25:22.643641 4758 generic.go:334] "Generic (PLEG): container finished" podID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerID="b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab" exitCode=0
Jan 30 09:25:22 crc kubenswrapper[4758]: I0130 09:25:22.643693 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fkv7" event={"ID":"6796d9ff-9d00-4d02-9bab-b5782c937e9c","Type":"ContainerDied","Data":"b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab"}
Jan 30 09:25:23 crc kubenswrapper[4758]: I0130 09:25:23.654830 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fkv7" event={"ID":"6796d9ff-9d00-4d02-9bab-b5782c937e9c","Type":"ContainerStarted","Data":"7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b"}
Jan 30 09:25:23 crc kubenswrapper[4758]: I0130 09:25:23.679276 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4fkv7" podStartSLOduration=4.27346879 podStartE2EDuration="7.679255982s" podCreationTimestamp="2026-01-30 09:25:16 +0000 UTC" firstStartedPulling="2026-01-30 09:25:19.616462527 +0000 UTC m=+3324.588774078" lastFinishedPulling="2026-01-30 09:25:23.022249719 +0000 UTC m=+3327.994561270" observedRunningTime="2026-01-30 09:25:23.677727115 +0000 UTC m=+3328.650038686" watchObservedRunningTime="2026-01-30 09:25:23.679255982 +0000 UTC m=+3328.651567533"
Jan 30 09:25:27 crc kubenswrapper[4758]: I0130 09:25:27.186225 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4fkv7"
Jan 30 09:25:27 crc kubenswrapper[4758]: I0130 09:25:27.186860 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4fkv7"
Jan 30 09:25:28 crc kubenswrapper[4758]: I0130 09:25:28.241375 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4fkv7" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerName="registry-server" probeResult="failure" output=<
Jan 30 09:25:28 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s
Jan 30 09:25:28 crc kubenswrapper[4758]: >
Jan 30 09:25:37 crc kubenswrapper[4758]: I0130 09:25:37.245303 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4fkv7"
Jan 30 09:25:37 crc kubenswrapper[4758]: I0130 09:25:37.296543 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4fkv7"
Jan 30 09:25:37 crc kubenswrapper[4758]: I0130 09:25:37.484824 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4fkv7"]
Jan 30 09:25:38 crc kubenswrapper[4758]: I0130 09:25:38.781953 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4fkv7" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerName="registry-server" containerID="cri-o://7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b" gracePeriod=2
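The community-operators pod above fails its startup probe with `timeout: failed to connect service ":50051" within 1s` before eventually reporting started. A rough Go equivalent of that check, dialing the registry-server gRPC port with a 1s budget (the port and timeout come from the probe output in the log; the loopback address is an assumption, since the real probe is executed against the pod):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Try to open a TCP connection to the registry-server port within 1s,
	// mirroring the probe's "failed to connect service within 1s" output.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:50051", time.Second)
	if err != nil {
		fmt.Printf("timeout: failed to connect service \":50051\" within 1s (%v)\n", err)
		return
	}
	conn.Close()
	fmt.Println("startup probe ok")
}
```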
containerID="cri-o://7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b" gracePeriod=2 Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.644182 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.801944 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvw26\" (UniqueName: \"kubernetes.io/projected/6796d9ff-9d00-4d02-9bab-b5782c937e9c-kube-api-access-xvw26\") pod \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.802139 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-catalog-content\") pod \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.802211 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-utilities\") pod \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\" (UID: \"6796d9ff-9d00-4d02-9bab-b5782c937e9c\") " Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.803678 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-utilities" (OuterVolumeSpecName: "utilities") pod "6796d9ff-9d00-4d02-9bab-b5782c937e9c" (UID: "6796d9ff-9d00-4d02-9bab-b5782c937e9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.806477 4758 generic.go:334] "Generic (PLEG): container finished" podID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerID="7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b" exitCode=0 Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.806534 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fkv7" event={"ID":"6796d9ff-9d00-4d02-9bab-b5782c937e9c","Type":"ContainerDied","Data":"7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b"} Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.806564 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4fkv7" event={"ID":"6796d9ff-9d00-4d02-9bab-b5782c937e9c","Type":"ContainerDied","Data":"b8e54a068799832547885c617e04ec61b45bb92e0212dfdc2af54909d1236107"} Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.806596 4758 scope.go:117] "RemoveContainer" containerID="7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.806695 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4fkv7" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.819953 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6796d9ff-9d00-4d02-9bab-b5782c937e9c-kube-api-access-xvw26" (OuterVolumeSpecName: "kube-api-access-xvw26") pod "6796d9ff-9d00-4d02-9bab-b5782c937e9c" (UID: "6796d9ff-9d00-4d02-9bab-b5782c937e9c"). InnerVolumeSpecName "kube-api-access-xvw26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.864428 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6796d9ff-9d00-4d02-9bab-b5782c937e9c" (UID: "6796d9ff-9d00-4d02-9bab-b5782c937e9c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.886421 4758 scope.go:117] "RemoveContainer" containerID="b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.906599 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.906623 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6796d9ff-9d00-4d02-9bab-b5782c937e9c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.906634 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvw26\" (UniqueName: \"kubernetes.io/projected/6796d9ff-9d00-4d02-9bab-b5782c937e9c-kube-api-access-xvw26\") on node \"crc\" DevicePath \"\"" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.910237 4758 scope.go:117] "RemoveContainer" containerID="7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.948200 4758 scope.go:117] "RemoveContainer" containerID="7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b" Jan 30 09:25:39 crc kubenswrapper[4758]: E0130 09:25:39.948594 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b\": container with ID starting with 7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b not found: ID does not exist" containerID="7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.949644 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b"} err="failed to get container status \"7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b\": rpc error: code = NotFound desc = could not find container \"7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b\": container with ID starting with 7385863695bfad3817d0af141006c29daa18f6185435760bcae51f98a480e20b not found: ID does not exist" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.949749 4758 scope.go:117] "RemoveContainer" containerID="b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab" Jan 30 09:25:39 crc kubenswrapper[4758]: E0130 09:25:39.950198 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab\": container with ID starting with b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab not found: ID does not exist" containerID="b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab" Jan 
30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.950255 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab"} err="failed to get container status \"b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab\": rpc error: code = NotFound desc = could not find container \"b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab\": container with ID starting with b74a5a4d14a792144b3981d9f37eae405aed8a6446ab00d4b60d7c6440188eab not found: ID does not exist" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.950290 4758 scope.go:117] "RemoveContainer" containerID="7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b" Jan 30 09:25:39 crc kubenswrapper[4758]: E0130 09:25:39.950659 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b\": container with ID starting with 7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b not found: ID does not exist" containerID="7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b" Jan 30 09:25:39 crc kubenswrapper[4758]: I0130 09:25:39.950695 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b"} err="failed to get container status \"7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b\": rpc error: code = NotFound desc = could not find container \"7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b\": container with ID starting with 7f4214dc2939d2076cbf8c8458c1711861e4678d4cd967e5f0cb2faf0e8f485b not found: ID does not exist" Jan 30 09:25:40 crc kubenswrapper[4758]: I0130 09:25:40.143903 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4fkv7"] Jan 30 09:25:40 crc kubenswrapper[4758]: I0130 09:25:40.151980 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4fkv7"] Jan 30 09:25:41 crc kubenswrapper[4758]: I0130 09:25:41.780754 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" path="/var/lib/kubelet/pods/6796d9ff-9d00-4d02-9bab-b5782c937e9c/volumes" Jan 30 09:25:52 crc kubenswrapper[4758]: I0130 09:25:52.387565 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:25:52 crc kubenswrapper[4758]: I0130 09:25:52.388160 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:25:52 crc kubenswrapper[4758]: I0130 09:25:52.388210 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:25:52 crc kubenswrapper[4758]: I0130 09:25:52.388956 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
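The machine-config-daemon liveness failures in this stretch are plain HTTP GETs against the daemon's health endpoint that get "connection refused"; after enough failures the kubelet records that the container "failed liveness probe, will be restarted". A sketch of that check (the URL is taken from the probe output in the log; the 1s client timeout is an assumption):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second}
	// Same request the prober logs above: GET http://127.0.0.1:8798/health.
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		fmt.Printf("Liveness probe status=failure output=%q\n", err.Error())
		return
	}
	defer resp.Body.Close()
	fmt.Println("Liveness probe status:", resp.Status)
}
```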
containerStatusID={"Type":"cri-o","ID":"cb507a492aec7fadcf39a2125e28a32759b1a8e40688e90ab277c9b1c21ac13b"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:25:52 crc kubenswrapper[4758]: I0130 09:25:52.389008 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://cb507a492aec7fadcf39a2125e28a32759b1a8e40688e90ab277c9b1c21ac13b" gracePeriod=600 Jan 30 09:25:52 crc kubenswrapper[4758]: I0130 09:25:52.916733 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="cb507a492aec7fadcf39a2125e28a32759b1a8e40688e90ab277c9b1c21ac13b" exitCode=0 Jan 30 09:25:52 crc kubenswrapper[4758]: I0130 09:25:52.916778 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"cb507a492aec7fadcf39a2125e28a32759b1a8e40688e90ab277c9b1c21ac13b"} Jan 30 09:25:52 crc kubenswrapper[4758]: I0130 09:25:52.917186 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1"} Jan 30 09:25:52 crc kubenswrapper[4758]: I0130 09:25:52.917218 4758 scope.go:117] "RemoveContainer" containerID="5537e3d6279480875e088354431763ffac78033870c26ef9a23ae54beb146202" Jan 30 09:27:52 crc kubenswrapper[4758]: I0130 09:27:52.387699 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:27:52 crc kubenswrapper[4758]: I0130 09:27:52.388194 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.027171 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f674k"] Jan 30 09:28:15 crc kubenswrapper[4758]: E0130 09:28:15.034969 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerName="registry-server" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.035006 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerName="registry-server" Jan 30 09:28:15 crc kubenswrapper[4758]: E0130 09:28:15.035031 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerName="extract-utilities" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.035052 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerName="extract-utilities" Jan 30 09:28:15 crc kubenswrapper[4758]: E0130 09:28:15.035075 4758 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerName="extract-content" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.035082 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerName="extract-content" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.035265 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="6796d9ff-9d00-4d02-9bab-b5782c937e9c" containerName="registry-server" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.036640 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.037493 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f674k"] Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.106622 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfrn9\" (UniqueName: \"kubernetes.io/projected/68c558e8-a714-430a-884f-794eb739486b-kube-api-access-nfrn9\") pod \"certified-operators-f674k\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.106691 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-utilities\") pod \"certified-operators-f674k\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.106814 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-catalog-content\") pod \"certified-operators-f674k\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.208303 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-catalog-content\") pod \"certified-operators-f674k\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.208474 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfrn9\" (UniqueName: \"kubernetes.io/projected/68c558e8-a714-430a-884f-794eb739486b-kube-api-access-nfrn9\") pod \"certified-operators-f674k\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.208512 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-utilities\") pod \"certified-operators-f674k\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.209111 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-utilities\") pod \"certified-operators-f674k\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.209169 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-catalog-content\") pod \"certified-operators-f674k\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.229791 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfrn9\" (UniqueName: \"kubernetes.io/projected/68c558e8-a714-430a-884f-794eb739486b-kube-api-access-nfrn9\") pod \"certified-operators-f674k\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:15 crc kubenswrapper[4758]: I0130 09:28:15.359731 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:16 crc kubenswrapper[4758]: I0130 09:28:16.059820 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f674k"] Jan 30 09:28:16 crc kubenswrapper[4758]: W0130 09:28:16.087530 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68c558e8_a714_430a_884f_794eb739486b.slice/crio-8cc8b0eb3d73f3a13e471a9a05d23f16a1aec69389d5064224635640089f6446 WatchSource:0}: Error finding container 8cc8b0eb3d73f3a13e471a9a05d23f16a1aec69389d5064224635640089f6446: Status 404 returned error can't find the container with id 8cc8b0eb3d73f3a13e471a9a05d23f16a1aec69389d5064224635640089f6446 Jan 30 09:28:16 crc kubenswrapper[4758]: I0130 09:28:16.187127 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f674k" event={"ID":"68c558e8-a714-430a-884f-794eb739486b","Type":"ContainerStarted","Data":"8cc8b0eb3d73f3a13e471a9a05d23f16a1aec69389d5064224635640089f6446"} Jan 30 09:28:17 crc kubenswrapper[4758]: I0130 09:28:17.212225 4758 generic.go:334] "Generic (PLEG): container finished" podID="68c558e8-a714-430a-884f-794eb739486b" containerID="da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79" exitCode=0 Jan 30 09:28:17 crc kubenswrapper[4758]: I0130 09:28:17.213399 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f674k" event={"ID":"68c558e8-a714-430a-884f-794eb739486b","Type":"ContainerDied","Data":"da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79"} Jan 30 09:28:17 crc kubenswrapper[4758]: I0130 09:28:17.215416 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:28:18 crc kubenswrapper[4758]: I0130 09:28:18.224276 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f674k" event={"ID":"68c558e8-a714-430a-884f-794eb739486b","Type":"ContainerStarted","Data":"5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db"} Jan 30 09:28:20 crc kubenswrapper[4758]: I0130 09:28:20.243769 4758 generic.go:334] "Generic (PLEG): container finished" podID="68c558e8-a714-430a-884f-794eb739486b" containerID="5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db" exitCode=0 
Jan 30 09:28:20 crc kubenswrapper[4758]: I0130 09:28:20.243841 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f674k" event={"ID":"68c558e8-a714-430a-884f-794eb739486b","Type":"ContainerDied","Data":"5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db"}
Jan 30 09:28:21 crc kubenswrapper[4758]: I0130 09:28:21.258984 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f674k" event={"ID":"68c558e8-a714-430a-884f-794eb739486b","Type":"ContainerStarted","Data":"a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f"}
Jan 30 09:28:22 crc kubenswrapper[4758]: I0130 09:28:22.387492 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 09:28:22 crc kubenswrapper[4758]: I0130 09:28:22.387874 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 09:28:25 crc kubenswrapper[4758]: I0130 09:28:25.359957 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f674k"
Jan 30 09:28:25 crc kubenswrapper[4758]: I0130 09:28:25.360321 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f674k"
Jan 30 09:28:26 crc kubenswrapper[4758]: I0130 09:28:26.409004 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-f674k" podUID="68c558e8-a714-430a-884f-794eb739486b" containerName="registry-server" probeResult="failure" output=<
Jan 30 09:28:26 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s
Jan 30 09:28:26 crc kubenswrapper[4758]: >
Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.667260 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f674k" podStartSLOduration=10.131771322 podStartE2EDuration="13.667236972s" podCreationTimestamp="2026-01-30 09:28:14 +0000 UTC" firstStartedPulling="2026-01-30 09:28:17.215160384 +0000 UTC m=+3502.187471935" lastFinishedPulling="2026-01-30 09:28:20.750626034 +0000 UTC m=+3505.722937585" observedRunningTime="2026-01-30 09:28:21.283721199 +0000 UTC m=+3506.256032750" watchObservedRunningTime="2026-01-30 09:28:27.667236972 +0000 UTC m=+3512.639548523"
Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.676726 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rdw24"]
Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.679568 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdw24"
Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.690399 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdw24"]
Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.772520 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-catalog-content\") pod \"redhat-marketplace-rdw24\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " pod="openshift-marketplace/redhat-marketplace-rdw24"
Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.772593 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6slc5\" (UniqueName: \"kubernetes.io/projected/adc6c17c-d7f2-4f48-86a1-74d2013747d0-kube-api-access-6slc5\") pod \"redhat-marketplace-rdw24\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " pod="openshift-marketplace/redhat-marketplace-rdw24"
Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.772953 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-utilities\") pod \"redhat-marketplace-rdw24\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " pod="openshift-marketplace/redhat-marketplace-rdw24"
Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.874978 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-utilities\") pod \"redhat-marketplace-rdw24\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " pod="openshift-marketplace/redhat-marketplace-rdw24"
Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.875116 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-catalog-content\") pod \"redhat-marketplace-rdw24\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " pod="openshift-marketplace/redhat-marketplace-rdw24"
Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.875168 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6slc5\" (UniqueName: \"kubernetes.io/projected/adc6c17c-d7f2-4f48-86a1-74d2013747d0-kube-api-access-6slc5\") pod \"redhat-marketplace-rdw24\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " pod="openshift-marketplace/redhat-marketplace-rdw24"
Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.875872 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-utilities\") pod \"redhat-marketplace-rdw24\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " pod="openshift-marketplace/redhat-marketplace-rdw24"
Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.875887 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-catalog-content\") pod \"redhat-marketplace-rdw24\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " pod="openshift-marketplace/redhat-marketplace-rdw24"
Jan 30 09:28:27 crc kubenswrapper[4758]: I0130 09:28:27.895791 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6slc5\" (UniqueName: \"kubernetes.io/projected/adc6c17c-d7f2-4f48-86a1-74d2013747d0-kube-api-access-6slc5\") pod \"redhat-marketplace-rdw24\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " pod="openshift-marketplace/redhat-marketplace-rdw24"
Jan 30 09:28:28 crc kubenswrapper[4758]: I0130 09:28:28.000803 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdw24"
Jan 30 09:28:28 crc kubenswrapper[4758]: I0130 09:28:28.525926 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdw24"]
Jan 30 09:28:28 crc kubenswrapper[4758]: W0130 09:28:28.551192 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc6c17c_d7f2_4f48_86a1_74d2013747d0.slice/crio-95efbe5521f1c49e3512a155a89429784b1e25da062825e4d0e26020ec044cf6 WatchSource:0}: Error finding container 95efbe5521f1c49e3512a155a89429784b1e25da062825e4d0e26020ec044cf6: Status 404 returned error can't find the container with id 95efbe5521f1c49e3512a155a89429784b1e25da062825e4d0e26020ec044cf6
Jan 30 09:28:29 crc kubenswrapper[4758]: I0130 09:28:29.336518 4758 generic.go:334] "Generic (PLEG): container finished" podID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerID="0ef8472d48cd7ea49c109f42f514daca323b939fbce39db440a086c70484a1ce" exitCode=0
Jan 30 09:28:29 crc kubenswrapper[4758]: I0130 09:28:29.336563 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdw24" event={"ID":"adc6c17c-d7f2-4f48-86a1-74d2013747d0","Type":"ContainerDied","Data":"0ef8472d48cd7ea49c109f42f514daca323b939fbce39db440a086c70484a1ce"}
Jan 30 09:28:29 crc kubenswrapper[4758]: I0130 09:28:29.337950 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdw24" event={"ID":"adc6c17c-d7f2-4f48-86a1-74d2013747d0","Type":"ContainerStarted","Data":"95efbe5521f1c49e3512a155a89429784b1e25da062825e4d0e26020ec044cf6"}
Jan 30 09:28:30 crc kubenswrapper[4758]: I0130 09:28:30.347526 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdw24" event={"ID":"adc6c17c-d7f2-4f48-86a1-74d2013747d0","Type":"ContainerStarted","Data":"d8b83d753c66a1a5a7a5af61e0d4a9366f21f197dfab723618542a9e67991d00"}
Jan 30 09:28:31 crc kubenswrapper[4758]: I0130 09:28:31.358263 4758 generic.go:334] "Generic (PLEG): container finished" podID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerID="d8b83d753c66a1a5a7a5af61e0d4a9366f21f197dfab723618542a9e67991d00" exitCode=0
Jan 30 09:28:31 crc kubenswrapper[4758]: I0130 09:28:31.358353 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdw24" event={"ID":"adc6c17c-d7f2-4f48-86a1-74d2013747d0","Type":"ContainerDied","Data":"d8b83d753c66a1a5a7a5af61e0d4a9366f21f197dfab723618542a9e67991d00"}
Jan 30 09:28:32 crc kubenswrapper[4758]: I0130 09:28:32.369282 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdw24" event={"ID":"adc6c17c-d7f2-4f48-86a1-74d2013747d0","Type":"ContainerStarted","Data":"a3b38454efa0d468a8586bdc7dc3b21138a28c4f2f35e0fe9b0d3f1ea3f3694d"}
Jan 30 09:28:32 crc kubenswrapper[4758]: I0130 09:28:32.389511 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rdw24" podStartSLOduration=2.998105518 podStartE2EDuration="5.389479242s" podCreationTimestamp="2026-01-30 09:28:27 +0000 UTC" firstStartedPulling="2026-01-30 09:28:29.33952968 +0000 UTC m=+3514.311841231" lastFinishedPulling="2026-01-30 09:28:31.730903404 +0000 UTC m=+3516.703214955" observedRunningTime="2026-01-30 09:28:32.387100698 +0000 UTC m=+3517.359412269" watchObservedRunningTime="2026-01-30 09:28:32.389479242 +0000 UTC m=+3517.361790793"
podStartE2EDuration="5.389479242s" podCreationTimestamp="2026-01-30 09:28:27 +0000 UTC" firstStartedPulling="2026-01-30 09:28:29.33952968 +0000 UTC m=+3514.311841231" lastFinishedPulling="2026-01-30 09:28:31.730903404 +0000 UTC m=+3516.703214955" observedRunningTime="2026-01-30 09:28:32.387100698 +0000 UTC m=+3517.359412269" watchObservedRunningTime="2026-01-30 09:28:32.389479242 +0000 UTC m=+3517.361790793" Jan 30 09:28:35 crc kubenswrapper[4758]: I0130 09:28:35.461500 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:35 crc kubenswrapper[4758]: I0130 09:28:35.627125 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:36 crc kubenswrapper[4758]: I0130 09:28:36.046157 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f674k"] Jan 30 09:28:37 crc kubenswrapper[4758]: I0130 09:28:37.423188 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f674k" podUID="68c558e8-a714-430a-884f-794eb739486b" containerName="registry-server" containerID="cri-o://a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f" gracePeriod=2 Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.000883 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.002709 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.089309 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.184694 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfrn9\" (UniqueName: \"kubernetes.io/projected/68c558e8-a714-430a-884f-794eb739486b-kube-api-access-nfrn9\") pod \"68c558e8-a714-430a-884f-794eb739486b\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.184903 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-catalog-content\") pod \"68c558e8-a714-430a-884f-794eb739486b\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.185060 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-utilities\") pod \"68c558e8-a714-430a-884f-794eb739486b\" (UID: \"68c558e8-a714-430a-884f-794eb739486b\") " Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.186883 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-utilities" (OuterVolumeSpecName: "utilities") pod "68c558e8-a714-430a-884f-794eb739486b" (UID: "68c558e8-a714-430a-884f-794eb739486b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.206341 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68c558e8-a714-430a-884f-794eb739486b-kube-api-access-nfrn9" (OuterVolumeSpecName: "kube-api-access-nfrn9") pod "68c558e8-a714-430a-884f-794eb739486b" (UID: "68c558e8-a714-430a-884f-794eb739486b"). InnerVolumeSpecName "kube-api-access-nfrn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.256305 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68c558e8-a714-430a-884f-794eb739486b" (UID: "68c558e8-a714-430a-884f-794eb739486b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.287188 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.287224 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c558e8-a714-430a-884f-794eb739486b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.287234 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfrn9\" (UniqueName: \"kubernetes.io/projected/68c558e8-a714-430a-884f-794eb739486b-kube-api-access-nfrn9\") on node \"crc\" DevicePath \"\"" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.433777 4758 generic.go:334] "Generic (PLEG): container finished" podID="68c558e8-a714-430a-884f-794eb739486b" containerID="a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f" exitCode=0 Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.434930 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f674k" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.439170 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f674k" event={"ID":"68c558e8-a714-430a-884f-794eb739486b","Type":"ContainerDied","Data":"a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f"} Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.439216 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f674k" event={"ID":"68c558e8-a714-430a-884f-794eb739486b","Type":"ContainerDied","Data":"8cc8b0eb3d73f3a13e471a9a05d23f16a1aec69389d5064224635640089f6446"} Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.439233 4758 scope.go:117] "RemoveContainer" containerID="a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.468636 4758 scope.go:117] "RemoveContainer" containerID="5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.475514 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f674k"] Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.488222 4758 scope.go:117] "RemoveContainer" containerID="da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.489314 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f674k"] Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.529970 4758 scope.go:117] "RemoveContainer" containerID="a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f" Jan 30 09:28:38 crc kubenswrapper[4758]: E0130 09:28:38.530534 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f\": container with ID starting with a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f not found: ID does not exist" containerID="a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.530581 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f"} err="failed to get container status \"a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f\": rpc error: code = NotFound desc = could not find container \"a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f\": container with ID starting with a3b64a6408bc6dc25f47b4e5e280831cf3bbec9c9bf2c7f46f985a10d9accf5f not found: ID does not exist" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.530609 4758 scope.go:117] "RemoveContainer" containerID="5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db" Jan 30 09:28:38 crc kubenswrapper[4758]: E0130 09:28:38.530877 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db\": container with ID starting with 5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db not found: ID does not exist" containerID="5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.530929 4758 
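The RemoveContainer / "ContainerStatus ... NotFound" / "DeleteContainer returned error" triples above are a benign race: by the time the kubelet re-queries the runtime, the container is already gone, so it logs the NotFound and moves on. A minimal sketch of that tolerant-cleanup pattern (the fake runtime and error value are illustrative stand-ins, not kubelet code):

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("NotFound: ID does not exist")

type fakeRuntime struct{ containers map[string]bool }

func (r *fakeRuntime) remove(id string) error {
	if !r.containers[id] {
		return errNotFound
	}
	delete(r.containers, id)
	return nil
}

// cleanup treats NotFound as success: the container was removed elsewhere.
func cleanup(r *fakeRuntime, id string) {
	if err := r.remove(id); errors.Is(err, errNotFound) {
		fmt.Printf("DeleteContainer returned error for %s: %v (ignored)\n", id, err)
	}
}

func main() {
	r := &fakeRuntime{containers: map[string]bool{"a3b64a64": true}}
	cleanup(r, "a3b64a64") // removes it
	cleanup(r, "a3b64a64") // already gone: NotFound, tolerated
}
```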
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db"} err="failed to get container status \"5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db\": rpc error: code = NotFound desc = could not find container \"5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db\": container with ID starting with 5981ca67e8364981966e1bbbe6614f3e9e211c59cb9dc59d4debbb2a7c4372db not found: ID does not exist" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.530946 4758 scope.go:117] "RemoveContainer" containerID="da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79" Jan 30 09:28:38 crc kubenswrapper[4758]: E0130 09:28:38.531297 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79\": container with ID starting with da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79 not found: ID does not exist" containerID="da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79" Jan 30 09:28:38 crc kubenswrapper[4758]: I0130 09:28:38.531325 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79"} err="failed to get container status \"da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79\": rpc error: code = NotFound desc = could not find container \"da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79\": container with ID starting with da1ce0f8868d67d1ecf858bcc04135feea246189078d5ab142de13def4c14a79 not found: ID does not exist" Jan 30 09:28:39 crc kubenswrapper[4758]: I0130 09:28:39.052093 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rdw24" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerName="registry-server" probeResult="failure" output=< Jan 30 09:28:39 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:28:39 crc kubenswrapper[4758]: > Jan 30 09:28:39 crc kubenswrapper[4758]: I0130 09:28:39.779093 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68c558e8-a714-430a-884f-794eb739486b" path="/var/lib/kubelet/pods/68c558e8-a714-430a-884f-794eb739486b/volumes" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.421348 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g4zt4"] Jan 30 09:28:41 crc kubenswrapper[4758]: E0130 09:28:41.422052 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c558e8-a714-430a-884f-794eb739486b" containerName="extract-utilities" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.422067 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c558e8-a714-430a-884f-794eb739486b" containerName="extract-utilities" Jan 30 09:28:41 crc kubenswrapper[4758]: E0130 09:28:41.422086 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c558e8-a714-430a-884f-794eb739486b" containerName="registry-server" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.422092 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c558e8-a714-430a-884f-794eb739486b" containerName="registry-server" Jan 30 09:28:41 crc kubenswrapper[4758]: E0130 09:28:41.422114 4758 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="68c558e8-a714-430a-884f-794eb739486b" containerName="extract-content" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.422121 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c558e8-a714-430a-884f-794eb739486b" containerName="extract-content" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.422300 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="68c558e8-a714-430a-884f-794eb739486b" containerName="registry-server" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.424818 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.441425 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4zt4"] Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.544993 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-utilities\") pod \"redhat-operators-g4zt4\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.545050 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ww6s\" (UniqueName: \"kubernetes.io/projected/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-kube-api-access-7ww6s\") pod \"redhat-operators-g4zt4\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.545155 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-catalog-content\") pod \"redhat-operators-g4zt4\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.646404 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-utilities\") pod \"redhat-operators-g4zt4\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.646454 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ww6s\" (UniqueName: \"kubernetes.io/projected/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-kube-api-access-7ww6s\") pod \"redhat-operators-g4zt4\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.646575 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-catalog-content\") pod \"redhat-operators-g4zt4\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.646957 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-utilities\") pod \"redhat-operators-g4zt4\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") 
" pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.647196 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-catalog-content\") pod \"redhat-operators-g4zt4\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.667539 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ww6s\" (UniqueName: \"kubernetes.io/projected/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-kube-api-access-7ww6s\") pod \"redhat-operators-g4zt4\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") " pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:41 crc kubenswrapper[4758]: I0130 09:28:41.769601 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:28:42 crc kubenswrapper[4758]: I0130 09:28:42.326191 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4zt4"] Jan 30 09:28:42 crc kubenswrapper[4758]: I0130 09:28:42.492148 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4zt4" event={"ID":"99afc7c0-1f67-4dcd-93f3-6fec892cdb11","Type":"ContainerStarted","Data":"b80cb638ffc2d4c1d2a9212d574aa63f341273f37e50fb903af0a0b72da45f1c"} Jan 30 09:28:43 crc kubenswrapper[4758]: I0130 09:28:43.502287 4758 generic.go:334] "Generic (PLEG): container finished" podID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerID="bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987" exitCode=0 Jan 30 09:28:43 crc kubenswrapper[4758]: I0130 09:28:43.502519 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4zt4" event={"ID":"99afc7c0-1f67-4dcd-93f3-6fec892cdb11","Type":"ContainerDied","Data":"bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987"} Jan 30 09:28:45 crc kubenswrapper[4758]: I0130 09:28:45.520519 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4zt4" event={"ID":"99afc7c0-1f67-4dcd-93f3-6fec892cdb11","Type":"ContainerStarted","Data":"13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463"} Jan 30 09:28:48 crc kubenswrapper[4758]: I0130 09:28:48.048268 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:48 crc kubenswrapper[4758]: I0130 09:28:48.098154 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:49 crc kubenswrapper[4758]: I0130 09:28:49.047799 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdw24"] Jan 30 09:28:49 crc kubenswrapper[4758]: I0130 09:28:49.549833 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rdw24" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerName="registry-server" containerID="cri-o://a3b38454efa0d468a8586bdc7dc3b21138a28c4f2f35e0fe9b0d3f1ea3f3694d" gracePeriod=2 Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.561183 4758 generic.go:334] "Generic (PLEG): container finished" podID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" 
containerID="a3b38454efa0d468a8586bdc7dc3b21138a28c4f2f35e0fe9b0d3f1ea3f3694d" exitCode=0 Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.561247 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdw24" event={"ID":"adc6c17c-d7f2-4f48-86a1-74d2013747d0","Type":"ContainerDied","Data":"a3b38454efa0d468a8586bdc7dc3b21138a28c4f2f35e0fe9b0d3f1ea3f3694d"} Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.671837 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.720475 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-catalog-content\") pod \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.721144 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6slc5\" (UniqueName: \"kubernetes.io/projected/adc6c17c-d7f2-4f48-86a1-74d2013747d0-kube-api-access-6slc5\") pod \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.721299 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-utilities\") pod \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\" (UID: \"adc6c17c-d7f2-4f48-86a1-74d2013747d0\") " Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.725066 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-utilities" (OuterVolumeSpecName: "utilities") pod "adc6c17c-d7f2-4f48-86a1-74d2013747d0" (UID: "adc6c17c-d7f2-4f48-86a1-74d2013747d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.727611 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adc6c17c-d7f2-4f48-86a1-74d2013747d0-kube-api-access-6slc5" (OuterVolumeSpecName: "kube-api-access-6slc5") pod "adc6c17c-d7f2-4f48-86a1-74d2013747d0" (UID: "adc6c17c-d7f2-4f48-86a1-74d2013747d0"). InnerVolumeSpecName "kube-api-access-6slc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.736512 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adc6c17c-d7f2-4f48-86a1-74d2013747d0" (UID: "adc6c17c-d7f2-4f48-86a1-74d2013747d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.823702 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.824022 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6slc5\" (UniqueName: \"kubernetes.io/projected/adc6c17c-d7f2-4f48-86a1-74d2013747d0-kube-api-access-6slc5\") on node \"crc\" DevicePath \"\"" Jan 30 09:28:50 crc kubenswrapper[4758]: I0130 09:28:50.824300 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc6c17c-d7f2-4f48-86a1-74d2013747d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:28:51 crc kubenswrapper[4758]: I0130 09:28:51.580207 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdw24" event={"ID":"adc6c17c-d7f2-4f48-86a1-74d2013747d0","Type":"ContainerDied","Data":"95efbe5521f1c49e3512a155a89429784b1e25da062825e4d0e26020ec044cf6"} Jan 30 09:28:51 crc kubenswrapper[4758]: I0130 09:28:51.580546 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdw24" Jan 30 09:28:51 crc kubenswrapper[4758]: I0130 09:28:51.580577 4758 scope.go:117] "RemoveContainer" containerID="a3b38454efa0d468a8586bdc7dc3b21138a28c4f2f35e0fe9b0d3f1ea3f3694d" Jan 30 09:28:51 crc kubenswrapper[4758]: I0130 09:28:51.604715 4758 scope.go:117] "RemoveContainer" containerID="d8b83d753c66a1a5a7a5af61e0d4a9366f21f197dfab723618542a9e67991d00" Jan 30 09:28:51 crc kubenswrapper[4758]: I0130 09:28:51.623060 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdw24"] Jan 30 09:28:51 crc kubenswrapper[4758]: I0130 09:28:51.630481 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdw24"] Jan 30 09:28:51 crc kubenswrapper[4758]: I0130 09:28:51.654742 4758 scope.go:117] "RemoveContainer" containerID="0ef8472d48cd7ea49c109f42f514daca323b939fbce39db440a086c70484a1ce" Jan 30 09:28:51 crc kubenswrapper[4758]: I0130 09:28:51.778371 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" path="/var/lib/kubelet/pods/adc6c17c-d7f2-4f48-86a1-74d2013747d0/volumes" Jan 30 09:28:52 crc kubenswrapper[4758]: I0130 09:28:52.387537 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:28:52 crc kubenswrapper[4758]: I0130 09:28:52.387589 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:28:52 crc kubenswrapper[4758]: I0130 09:28:52.387744 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:28:52 crc kubenswrapper[4758]: I0130 09:28:52.388415 4758 
Jan 30 09:28:52 crc kubenswrapper[4758]: I0130 09:28:52.388469 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" gracePeriod=600
Jan 30 09:28:52 crc kubenswrapper[4758]: I0130 09:28:52.591416 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" exitCode=0
Jan 30 09:28:52 crc kubenswrapper[4758]: I0130 09:28:52.591477 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1"}
Jan 30 09:28:52 crc kubenswrapper[4758]: I0130 09:28:52.591511 4758 scope.go:117] "RemoveContainer" containerID="cb507a492aec7fadcf39a2125e28a32759b1a8e40688e90ab277c9b1c21ac13b"
Jan 30 09:28:53 crc kubenswrapper[4758]: E0130 09:28:53.014830 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:28:53 crc kubenswrapper[4758]: I0130 09:28:53.605555 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1"
Jan 30 09:28:53 crc kubenswrapper[4758]: E0130 09:28:53.605896 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:28:53 crc kubenswrapper[4758]: I0130 09:28:53.606688 4758 generic.go:334] "Generic (PLEG): container finished" podID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerID="13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463" exitCode=0
Jan 30 09:28:53 crc kubenswrapper[4758]: I0130 09:28:53.606743 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4zt4" event={"ID":"99afc7c0-1f67-4dcd-93f3-6fec892cdb11","Type":"ContainerDied","Data":"13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463"}
Jan 30 09:28:54 crc kubenswrapper[4758]: I0130 09:28:54.618951 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4zt4" event={"ID":"99afc7c0-1f67-4dcd-93f3-6fec892cdb11","Type":"ContainerStarted","Data":"8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9"}
event={"ID":"99afc7c0-1f67-4dcd-93f3-6fec892cdb11","Type":"ContainerStarted","Data":"8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9"} Jan 30 09:28:54 crc kubenswrapper[4758]: I0130 09:28:54.642973 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g4zt4" podStartSLOduration=3.039037715 podStartE2EDuration="13.642952757s" podCreationTimestamp="2026-01-30 09:28:41 +0000 UTC" firstStartedPulling="2026-01-30 09:28:43.507629011 +0000 UTC m=+3528.479940562" lastFinishedPulling="2026-01-30 09:28:54.111544053 +0000 UTC m=+3539.083855604" observedRunningTime="2026-01-30 09:28:54.639947332 +0000 UTC m=+3539.612258893" watchObservedRunningTime="2026-01-30 09:28:54.642952757 +0000 UTC m=+3539.615264318" Jan 30 09:29:01 crc kubenswrapper[4758]: I0130 09:29:01.780171 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:29:01 crc kubenswrapper[4758]: I0130 09:29:01.780753 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:29:02 crc kubenswrapper[4758]: I0130 09:29:02.821603 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g4zt4" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="registry-server" probeResult="failure" output=< Jan 30 09:29:02 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:29:02 crc kubenswrapper[4758]: > Jan 30 09:29:07 crc kubenswrapper[4758]: I0130 09:29:07.770584 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:29:07 crc kubenswrapper[4758]: E0130 09:29:07.771342 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:29:12 crc kubenswrapper[4758]: I0130 09:29:12.825403 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g4zt4" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="registry-server" probeResult="failure" output=< Jan 30 09:29:12 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:29:12 crc kubenswrapper[4758]: > Jan 30 09:29:19 crc kubenswrapper[4758]: I0130 09:29:19.769470 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:29:19 crc kubenswrapper[4758]: E0130 09:29:19.770083 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:29:22 crc kubenswrapper[4758]: I0130 09:29:22.824323 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g4zt4" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" 
containerName="registry-server" probeResult="failure" output=< Jan 30 09:29:22 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:29:22 crc kubenswrapper[4758]: > Jan 30 09:29:31 crc kubenswrapper[4758]: I0130 09:29:31.768710 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:29:31 crc kubenswrapper[4758]: E0130 09:29:31.769734 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:29:32 crc kubenswrapper[4758]: I0130 09:29:32.817351 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g4zt4" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="registry-server" probeResult="failure" output=< Jan 30 09:29:32 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:29:32 crc kubenswrapper[4758]: > Jan 30 09:29:41 crc kubenswrapper[4758]: I0130 09:29:41.819717 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:29:41 crc kubenswrapper[4758]: I0130 09:29:41.882433 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g4zt4" Jan 30 09:29:42 crc kubenswrapper[4758]: I0130 09:29:42.644980 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4zt4"] Jan 30 09:29:43 crc kubenswrapper[4758]: I0130 09:29:43.023314 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g4zt4" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="registry-server" containerID="cri-o://8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9" gracePeriod=2 Jan 30 09:29:43 crc kubenswrapper[4758]: I0130 09:29:43.740521 4758 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 09:29:43 crc kubenswrapper[4758]: I0130 09:29:43.912912 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ww6s\" (UniqueName: \"kubernetes.io/projected/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-kube-api-access-7ww6s\") pod \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") "
Jan 30 09:29:43 crc kubenswrapper[4758]: I0130 09:29:43.913467 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-utilities\") pod \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") "
Jan 30 09:29:43 crc kubenswrapper[4758]: I0130 09:29:43.913783 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-catalog-content\") pod \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\" (UID: \"99afc7c0-1f67-4dcd-93f3-6fec892cdb11\") "
Jan 30 09:29:43 crc kubenswrapper[4758]: I0130 09:29:43.914078 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-utilities" (OuterVolumeSpecName: "utilities") pod "99afc7c0-1f67-4dcd-93f3-6fec892cdb11" (UID: "99afc7c0-1f67-4dcd-93f3-6fec892cdb11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 09:29:43 crc kubenswrapper[4758]: I0130 09:29:43.917802 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 09:29:43 crc kubenswrapper[4758]: I0130 09:29:43.935587 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-kube-api-access-7ww6s" (OuterVolumeSpecName: "kube-api-access-7ww6s") pod "99afc7c0-1f67-4dcd-93f3-6fec892cdb11" (UID: "99afc7c0-1f67-4dcd-93f3-6fec892cdb11"). InnerVolumeSpecName "kube-api-access-7ww6s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.020064 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ww6s\" (UniqueName: \"kubernetes.io/projected/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-kube-api-access-7ww6s\") on node \"crc\" DevicePath \"\""
Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.033659 4758 generic.go:334] "Generic (PLEG): container finished" podID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerID="8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9" exitCode=0
Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.033735 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4zt4"
Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.033746 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4zt4" event={"ID":"99afc7c0-1f67-4dcd-93f3-6fec892cdb11","Type":"ContainerDied","Data":"8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9"}
Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.034912 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4zt4" event={"ID":"99afc7c0-1f67-4dcd-93f3-6fec892cdb11","Type":"ContainerDied","Data":"b80cb638ffc2d4c1d2a9212d574aa63f341273f37e50fb903af0a0b72da45f1c"}
Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.034944 4758 scope.go:117] "RemoveContainer" containerID="8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9"
Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.043463 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99afc7c0-1f67-4dcd-93f3-6fec892cdb11" (UID: "99afc7c0-1f67-4dcd-93f3-6fec892cdb11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.055007 4758 scope.go:117] "RemoveContainer" containerID="13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463"
Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.078558 4758 scope.go:117] "RemoveContainer" containerID="bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987"
Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.121977 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99afc7c0-1f67-4dcd-93f3-6fec892cdb11-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.123780 4758 scope.go:117] "RemoveContainer" containerID="8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9"
Jan 30 09:29:44 crc kubenswrapper[4758]: E0130 09:29:44.124973 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9\": container with ID starting with 8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9 not found: ID does not exist" containerID="8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9"
Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.125031 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9"} err="failed to get container status \"8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9\": rpc error: code = NotFound desc = could not find container \"8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9\": container with ID starting with 8f623f2127cacf38235f90faea0af016f928e53aa0027da30e3601f78d9ba7f9 not found: ID does not exist"
Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.125087 4758 scope.go:117] "RemoveContainer" containerID="13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463"
Jan 30 09:29:44 crc kubenswrapper[4758]: E0130 09:29:44.126472 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463\": container with ID starting with 13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463 not found: ID does not exist" containerID="13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463"
find container \"13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463\": container with ID starting with 13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463 not found: ID does not exist" containerID="13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463" Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.126501 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463"} err="failed to get container status \"13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463\": rpc error: code = NotFound desc = could not find container \"13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463\": container with ID starting with 13698f66df89aad055541f156203f49a717a1ffa1ed20975350a1b9181644463 not found: ID does not exist" Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.126521 4758 scope.go:117] "RemoveContainer" containerID="bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987" Jan 30 09:29:44 crc kubenswrapper[4758]: E0130 09:29:44.127065 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987\": container with ID starting with bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987 not found: ID does not exist" containerID="bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987" Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.127102 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987"} err="failed to get container status \"bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987\": rpc error: code = NotFound desc = could not find container \"bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987\": container with ID starting with bf0723d7aaf4fd16c04b7c0ca385f11094c2d26d37c3c09402a942fbf1217987 not found: ID does not exist" Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.367214 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4zt4"] Jan 30 09:29:44 crc kubenswrapper[4758]: I0130 09:29:44.373482 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g4zt4"] Jan 30 09:29:45 crc kubenswrapper[4758]: I0130 09:29:45.775402 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:29:45 crc kubenswrapper[4758]: E0130 09:29:45.775882 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:29:45 crc kubenswrapper[4758]: I0130 09:29:45.780949 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" path="/var/lib/kubelet/pods/99afc7c0-1f67-4dcd-93f3-6fec892cdb11/volumes" Jan 30 09:29:56 crc kubenswrapper[4758]: I0130 09:29:56.768369 4758 scope.go:117] "RemoveContainer" 
containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:29:56 crc kubenswrapper[4758]: E0130 09:29:56.768954 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.175586 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5"] Jan 30 09:30:00 crc kubenswrapper[4758]: E0130 09:30:00.176359 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="extract-utilities" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.176376 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="extract-utilities" Jan 30 09:30:00 crc kubenswrapper[4758]: E0130 09:30:00.176397 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerName="registry-server" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.176405 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerName="registry-server" Jan 30 09:30:00 crc kubenswrapper[4758]: E0130 09:30:00.176420 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="registry-server" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.176430 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="registry-server" Jan 30 09:30:00 crc kubenswrapper[4758]: E0130 09:30:00.176450 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerName="extract-utilities" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.176459 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerName="extract-utilities" Jan 30 09:30:00 crc kubenswrapper[4758]: E0130 09:30:00.176473 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerName="extract-content" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.176481 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" containerName="extract-content" Jan 30 09:30:00 crc kubenswrapper[4758]: E0130 09:30:00.176499 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="extract-content" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.176506 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="extract-content" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.176719 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="99afc7c0-1f67-4dcd-93f3-6fec892cdb11" containerName="registry-server" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.176753 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="adc6c17c-d7f2-4f48-86a1-74d2013747d0" 
containerName="registry-server" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.177513 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.192661 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5"] Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.195474 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.196144 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.237704 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7eb6a254-a4f7-410f-a4fa-018580518f25-config-volume\") pod \"collect-profiles-29496090-ckdl5\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.238211 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7eb6a254-a4f7-410f-a4fa-018580518f25-secret-volume\") pod \"collect-profiles-29496090-ckdl5\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.238270 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttgnf\" (UniqueName: \"kubernetes.io/projected/7eb6a254-a4f7-410f-a4fa-018580518f25-kube-api-access-ttgnf\") pod \"collect-profiles-29496090-ckdl5\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.339963 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7eb6a254-a4f7-410f-a4fa-018580518f25-secret-volume\") pod \"collect-profiles-29496090-ckdl5\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.340023 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttgnf\" (UniqueName: \"kubernetes.io/projected/7eb6a254-a4f7-410f-a4fa-018580518f25-kube-api-access-ttgnf\") pod \"collect-profiles-29496090-ckdl5\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.340080 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7eb6a254-a4f7-410f-a4fa-018580518f25-config-volume\") pod \"collect-profiles-29496090-ckdl5\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.341331 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7eb6a254-a4f7-410f-a4fa-018580518f25-config-volume\") pod \"collect-profiles-29496090-ckdl5\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.349998 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7eb6a254-a4f7-410f-a4fa-018580518f25-secret-volume\") pod \"collect-profiles-29496090-ckdl5\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.361077 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttgnf\" (UniqueName: \"kubernetes.io/projected/7eb6a254-a4f7-410f-a4fa-018580518f25-kube-api-access-ttgnf\") pod \"collect-profiles-29496090-ckdl5\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:00 crc kubenswrapper[4758]: I0130 09:30:00.510536 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:01 crc kubenswrapper[4758]: I0130 09:30:01.064430 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5"] Jan 30 09:30:01 crc kubenswrapper[4758]: W0130 09:30:01.070467 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eb6a254_a4f7_410f_a4fa_018580518f25.slice/crio-5c86f238ef52545f49dfe2a30f2f4e6d74f73e71178b54660a89bf28d3d5b23e WatchSource:0}: Error finding container 5c86f238ef52545f49dfe2a30f2f4e6d74f73e71178b54660a89bf28d3d5b23e: Status 404 returned error can't find the container with id 5c86f238ef52545f49dfe2a30f2f4e6d74f73e71178b54660a89bf28d3d5b23e Jan 30 09:30:01 crc kubenswrapper[4758]: I0130 09:30:01.191760 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" event={"ID":"7eb6a254-a4f7-410f-a4fa-018580518f25","Type":"ContainerStarted","Data":"5c86f238ef52545f49dfe2a30f2f4e6d74f73e71178b54660a89bf28d3d5b23e"} Jan 30 09:30:02 crc kubenswrapper[4758]: I0130 09:30:02.201614 4758 generic.go:334] "Generic (PLEG): container finished" podID="7eb6a254-a4f7-410f-a4fa-018580518f25" containerID="a3f6ac664b00b1f5b33d8b6858ec6fe6a87aa7b1e4e6ff49e0d26c386269eb16" exitCode=0 Jan 30 09:30:02 crc kubenswrapper[4758]: I0130 09:30:02.201966 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" event={"ID":"7eb6a254-a4f7-410f-a4fa-018580518f25","Type":"ContainerDied","Data":"a3f6ac664b00b1f5b33d8b6858ec6fe6a87aa7b1e4e6ff49e0d26c386269eb16"} Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.685486 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.824838 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7eb6a254-a4f7-410f-a4fa-018580518f25-secret-volume\") pod \"7eb6a254-a4f7-410f-a4fa-018580518f25\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.824977 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7eb6a254-a4f7-410f-a4fa-018580518f25-config-volume\") pod \"7eb6a254-a4f7-410f-a4fa-018580518f25\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.825086 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttgnf\" (UniqueName: \"kubernetes.io/projected/7eb6a254-a4f7-410f-a4fa-018580518f25-kube-api-access-ttgnf\") pod \"7eb6a254-a4f7-410f-a4fa-018580518f25\" (UID: \"7eb6a254-a4f7-410f-a4fa-018580518f25\") " Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.828247 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eb6a254-a4f7-410f-a4fa-018580518f25-config-volume" (OuterVolumeSpecName: "config-volume") pod "7eb6a254-a4f7-410f-a4fa-018580518f25" (UID: "7eb6a254-a4f7-410f-a4fa-018580518f25"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.847568 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb6a254-a4f7-410f-a4fa-018580518f25-kube-api-access-ttgnf" (OuterVolumeSpecName: "kube-api-access-ttgnf") pod "7eb6a254-a4f7-410f-a4fa-018580518f25" (UID: "7eb6a254-a4f7-410f-a4fa-018580518f25"). InnerVolumeSpecName "kube-api-access-ttgnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.858628 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb6a254-a4f7-410f-a4fa-018580518f25-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7eb6a254-a4f7-410f-a4fa-018580518f25" (UID: "7eb6a254-a4f7-410f-a4fa-018580518f25"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.927848 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7eb6a254-a4f7-410f-a4fa-018580518f25-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.927879 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7eb6a254-a4f7-410f-a4fa-018580518f25-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:30:03 crc kubenswrapper[4758]: I0130 09:30:03.928150 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttgnf\" (UniqueName: \"kubernetes.io/projected/7eb6a254-a4f7-410f-a4fa-018580518f25-kube-api-access-ttgnf\") on node \"crc\" DevicePath \"\"" Jan 30 09:30:04 crc kubenswrapper[4758]: I0130 09:30:04.220267 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" event={"ID":"7eb6a254-a4f7-410f-a4fa-018580518f25","Type":"ContainerDied","Data":"5c86f238ef52545f49dfe2a30f2f4e6d74f73e71178b54660a89bf28d3d5b23e"} Jan 30 09:30:04 crc kubenswrapper[4758]: I0130 09:30:04.220313 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c86f238ef52545f49dfe2a30f2f4e6d74f73e71178b54660a89bf28d3d5b23e" Jan 30 09:30:04 crc kubenswrapper[4758]: I0130 09:30:04.220338 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496090-ckdl5" Jan 30 09:30:04 crc kubenswrapper[4758]: I0130 09:30:04.788998 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87"] Jan 30 09:30:04 crc kubenswrapper[4758]: I0130 09:30:04.796891 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496045-qgd87"] Jan 30 09:30:05 crc kubenswrapper[4758]: I0130 09:30:05.782402 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76da59ec-7916-4d09-8154-61e9848aaec6" path="/var/lib/kubelet/pods/76da59ec-7916-4d09-8154-61e9848aaec6/volumes" Jan 30 09:30:08 crc kubenswrapper[4758]: I0130 09:30:08.768791 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:30:08 crc kubenswrapper[4758]: E0130 09:30:08.769497 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:30:10 crc kubenswrapper[4758]: I0130 09:30:10.644278 4758 scope.go:117] "RemoveContainer" containerID="60e0a5bbfedb1bfda46d2d92720410ff62ddecd2757f775d7598cb6c8cd22199" Jan 30 09:30:20 crc kubenswrapper[4758]: I0130 09:30:20.768772 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:30:20 crc kubenswrapper[4758]: E0130 09:30:20.769596 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Jan 30 09:30:31 crc kubenswrapper[4758]: I0130 09:30:31.768208 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1"
Jan 30 09:30:31 crc kubenswrapper[4758]: E0130 09:30:31.769025 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:30:44 crc kubenswrapper[4758]: I0130 09:30:44.768494 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1"
Jan 30 09:30:44 crc kubenswrapper[4758]: E0130 09:30:44.769237 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:30:57 crc kubenswrapper[4758]: I0130 09:30:57.769532 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1"
Jan 30 09:30:57 crc kubenswrapper[4758]: E0130 09:30:57.770907 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:31:09 crc kubenswrapper[4758]: I0130 09:31:09.768956 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1"
Jan 30 09:31:09 crc kubenswrapper[4758]: E0130 09:31:09.769773 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:31:20 crc kubenswrapper[4758]: I0130 09:31:20.769289 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1"
Jan 30 09:31:20 crc kubenswrapper[4758]: E0130 09:31:20.770068 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:31:34 crc kubenswrapper[4758]: I0130 09:31:34.768412 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:31:34 crc kubenswrapper[4758]: E0130 09:31:34.769083 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:31:45 crc kubenswrapper[4758]: I0130 09:31:45.776848 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:31:45 crc kubenswrapper[4758]: E0130 09:31:45.778058 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:31:57 crc kubenswrapper[4758]: I0130 09:31:57.768998 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:31:57 crc kubenswrapper[4758]: E0130 09:31:57.771253 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:32:09 crc kubenswrapper[4758]: I0130 09:32:09.768828 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:32:09 crc kubenswrapper[4758]: E0130 09:32:09.769640 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:32:23 crc kubenswrapper[4758]: I0130 09:32:23.769139 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:32:23 crc kubenswrapper[4758]: E0130 09:32:23.769992 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:32:37 crc kubenswrapper[4758]: I0130 09:32:37.769517 4758 
Jan 30 09:32:37 crc kubenswrapper[4758]: E0130 09:32:37.770484 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:32:48 crc kubenswrapper[4758]: I0130 09:32:48.769663 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1"
Jan 30 09:32:48 crc kubenswrapper[4758]: E0130 09:32:48.770471 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:33:03 crc kubenswrapper[4758]: I0130 09:33:03.769591 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1"
Jan 30 09:33:03 crc kubenswrapper[4758]: E0130 09:33:03.770434 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:33:18 crc kubenswrapper[4758]: I0130 09:33:18.769445 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1"
Jan 30 09:33:18 crc kubenswrapper[4758]: E0130 09:33:18.770772 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:33:31 crc kubenswrapper[4758]: I0130 09:33:31.770457 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1"
Jan 30 09:33:31 crc kubenswrapper[4758]: E0130 09:33:31.772578 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:33:43 crc kubenswrapper[4758]: I0130 09:33:43.773093 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1"
Jan 30 09:33:43 crc kubenswrapper[4758]: E0130 09:33:43.773935 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:33:58 crc kubenswrapper[4758]: I0130 09:33:58.768454 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1"
Jan 30 09:33:59 crc kubenswrapper[4758]: I0130 09:33:59.264458 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"f621d5061ef980f9a3d7bdb05f6303bc701eff4159f79eca90a822b5aba48643"}
Jan 30 09:35:23 crc kubenswrapper[4758]: I0130 09:35:23.216353 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-75f5775999-fhl5h" podUID="c2358e5c-db98-4b7b-8b6c-2e83132655a9" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.430870 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x4m4x"]
Jan 30 09:35:58 crc kubenswrapper[4758]: E0130 09:35:58.437171 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb6a254-a4f7-410f-a4fa-018580518f25" containerName="collect-profiles"
Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.438570 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb6a254-a4f7-410f-a4fa-018580518f25" containerName="collect-profiles"
Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.439080 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb6a254-a4f7-410f-a4fa-018580518f25" containerName="collect-profiles"
Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.441076 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x4m4x"
Need to start a new one" pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.451655 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x4m4x"] Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.515476 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-utilities\") pod \"community-operators-x4m4x\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.515556 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjgx6\" (UniqueName: \"kubernetes.io/projected/debfcde3-b34c-4223-aaf0-c39aa0f5f015-kube-api-access-zjgx6\") pod \"community-operators-x4m4x\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.515605 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-catalog-content\") pod \"community-operators-x4m4x\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.616945 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-utilities\") pod \"community-operators-x4m4x\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.616997 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjgx6\" (UniqueName: \"kubernetes.io/projected/debfcde3-b34c-4223-aaf0-c39aa0f5f015-kube-api-access-zjgx6\") pod \"community-operators-x4m4x\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.617029 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-catalog-content\") pod \"community-operators-x4m4x\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.617515 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-catalog-content\") pod \"community-operators-x4m4x\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.622599 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-utilities\") pod \"community-operators-x4m4x\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.647544 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zjgx6\" (UniqueName: \"kubernetes.io/projected/debfcde3-b34c-4223-aaf0-c39aa0f5f015-kube-api-access-zjgx6\") pod \"community-operators-x4m4x\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") " pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:58 crc kubenswrapper[4758]: I0130 09:35:58.817488 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:35:59 crc kubenswrapper[4758]: I0130 09:35:59.433332 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x4m4x"] Jan 30 09:36:00 crc kubenswrapper[4758]: I0130 09:36:00.296956 4758 generic.go:334] "Generic (PLEG): container finished" podID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerID="366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052" exitCode=0 Jan 30 09:36:00 crc kubenswrapper[4758]: I0130 09:36:00.297013 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4m4x" event={"ID":"debfcde3-b34c-4223-aaf0-c39aa0f5f015","Type":"ContainerDied","Data":"366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052"} Jan 30 09:36:00 crc kubenswrapper[4758]: I0130 09:36:00.297085 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4m4x" event={"ID":"debfcde3-b34c-4223-aaf0-c39aa0f5f015","Type":"ContainerStarted","Data":"209a026d27a75bc017328d882a6ebbaff9f5b2b309f32daabefef59f4a7e5def"} Jan 30 09:36:00 crc kubenswrapper[4758]: I0130 09:36:00.299085 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:36:02 crc kubenswrapper[4758]: I0130 09:36:02.318586 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4m4x" event={"ID":"debfcde3-b34c-4223-aaf0-c39aa0f5f015","Type":"ContainerStarted","Data":"6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde"} Jan 30 09:36:03 crc kubenswrapper[4758]: I0130 09:36:03.328357 4758 generic.go:334] "Generic (PLEG): container finished" podID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerID="6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde" exitCode=0 Jan 30 09:36:03 crc kubenswrapper[4758]: I0130 09:36:03.328446 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4m4x" event={"ID":"debfcde3-b34c-4223-aaf0-c39aa0f5f015","Type":"ContainerDied","Data":"6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde"} Jan 30 09:36:04 crc kubenswrapper[4758]: I0130 09:36:04.338958 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4m4x" event={"ID":"debfcde3-b34c-4223-aaf0-c39aa0f5f015","Type":"ContainerStarted","Data":"4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d"} Jan 30 09:36:04 crc kubenswrapper[4758]: I0130 09:36:04.378953 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x4m4x" podStartSLOduration=2.926966293 podStartE2EDuration="6.378927429s" podCreationTimestamp="2026-01-30 09:35:58 +0000 UTC" firstStartedPulling="2026-01-30 09:36:00.298779778 +0000 UTC m=+3965.271091329" lastFinishedPulling="2026-01-30 09:36:03.750740914 +0000 UTC m=+3968.723052465" observedRunningTime="2026-01-30 09:36:04.366443276 +0000 UTC m=+3969.338754837" watchObservedRunningTime="2026-01-30 
Jan 30 09:36:08 crc kubenswrapper[4758]: I0130 09:36:08.818313 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x4m4x"
Jan 30 09:36:08 crc kubenswrapper[4758]: I0130 09:36:08.818786 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x4m4x"
Jan 30 09:36:08 crc kubenswrapper[4758]: I0130 09:36:08.867903 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x4m4x"
Jan 30 09:36:09 crc kubenswrapper[4758]: I0130 09:36:09.430512 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x4m4x"
Jan 30 09:36:09 crc kubenswrapper[4758]: I0130 09:36:09.500032 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x4m4x"]
Jan 30 09:36:11 crc kubenswrapper[4758]: I0130 09:36:11.396718 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x4m4x" podUID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerName="registry-server" containerID="cri-o://4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d" gracePeriod=2
Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.203174 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x4m4x"
Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.279734 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-utilities\") pod \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") "
Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.279954 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjgx6\" (UniqueName: \"kubernetes.io/projected/debfcde3-b34c-4223-aaf0-c39aa0f5f015-kube-api-access-zjgx6\") pod \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") "
Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.280167 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-catalog-content\") pod \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\" (UID: \"debfcde3-b34c-4223-aaf0-c39aa0f5f015\") "
Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.280707 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-utilities" (OuterVolumeSpecName: "utilities") pod "debfcde3-b34c-4223-aaf0-c39aa0f5f015" (UID: "debfcde3-b34c-4223-aaf0-c39aa0f5f015"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.286312 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/debfcde3-b34c-4223-aaf0-c39aa0f5f015-kube-api-access-zjgx6" (OuterVolumeSpecName: "kube-api-access-zjgx6") pod "debfcde3-b34c-4223-aaf0-c39aa0f5f015" (UID: "debfcde3-b34c-4223-aaf0-c39aa0f5f015"). InnerVolumeSpecName "kube-api-access-zjgx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
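gracePeriod=2 on the kill above means the runtime gets two seconds between the polite stop signal and the forced kill. A rough stdlib-only sketch of that stop sequence (real kills go through the CRI, not exec.Cmd; names are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to the grace period for a
// clean exit, then falls back to SIGKILL.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	_ = cmd.Process.Signal(syscall.SIGTERM)
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case <-done:
		fmt.Println("exited within grace period")
	case <-time.After(grace):
		_ = cmd.Process.Kill() // SIGKILL after the grace period
		<-done
		fmt.Println("killed after grace period")
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	stopWithGrace(cmd, 2*time.Second)
}
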
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.346261 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "debfcde3-b34c-4223-aaf0-c39aa0f5f015" (UID: "debfcde3-b34c-4223-aaf0-c39aa0f5f015"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.383103 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjgx6\" (UniqueName: \"kubernetes.io/projected/debfcde3-b34c-4223-aaf0-c39aa0f5f015-kube-api-access-zjgx6\") on node \"crc\" DevicePath \"\"" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.383143 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.383155 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/debfcde3-b34c-4223-aaf0-c39aa0f5f015-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.409446 4758 generic.go:334] "Generic (PLEG): container finished" podID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerID="4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d" exitCode=0 Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.409526 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4m4x" event={"ID":"debfcde3-b34c-4223-aaf0-c39aa0f5f015","Type":"ContainerDied","Data":"4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d"} Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.409574 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4m4x" event={"ID":"debfcde3-b34c-4223-aaf0-c39aa0f5f015","Type":"ContainerDied","Data":"209a026d27a75bc017328d882a6ebbaff9f5b2b309f32daabefef59f4a7e5def"} Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.409578 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x4m4x" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.409592 4758 scope.go:117] "RemoveContainer" containerID="4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.439233 4758 scope.go:117] "RemoveContainer" containerID="6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.448363 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x4m4x"] Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.458111 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x4m4x"] Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.468400 4758 scope.go:117] "RemoveContainer" containerID="366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.508384 4758 scope.go:117] "RemoveContainer" containerID="4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d" Jan 30 09:36:12 crc kubenswrapper[4758]: E0130 09:36:12.508795 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d\": container with ID starting with 4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d not found: ID does not exist" containerID="4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.508843 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d"} err="failed to get container status \"4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d\": rpc error: code = NotFound desc = could not find container \"4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d\": container with ID starting with 4cb4dda5ea41b96a21256d403e7f817e52e8438f8f8f8e810200b76707cee38d not found: ID does not exist" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.508876 4758 scope.go:117] "RemoveContainer" containerID="6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde" Jan 30 09:36:12 crc kubenswrapper[4758]: E0130 09:36:12.509286 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde\": container with ID starting with 6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde not found: ID does not exist" containerID="6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.509324 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde"} err="failed to get container status \"6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde\": rpc error: code = NotFound desc = could not find container \"6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde\": container with ID starting with 6d3032c2d7cb99ca6d0b440338d1dfedae7ec70a25b7d0914d7eef809e5c0bde not found: ID does not exist" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.509349 4758 scope.go:117] "RemoveContainer" 
containerID="366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052" Jan 30 09:36:12 crc kubenswrapper[4758]: E0130 09:36:12.509684 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052\": container with ID starting with 366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052 not found: ID does not exist" containerID="366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052" Jan 30 09:36:12 crc kubenswrapper[4758]: I0130 09:36:12.509706 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052"} err="failed to get container status \"366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052\": rpc error: code = NotFound desc = could not find container \"366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052\": container with ID starting with 366affd7d6931b4a608cbf194489bd216eabdb6bf7b00d309355a0ce640fd052 not found: ID does not exist" Jan 30 09:36:13 crc kubenswrapper[4758]: I0130 09:36:13.779378 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" path="/var/lib/kubelet/pods/debfcde3-b34c-4223-aaf0-c39aa0f5f015/volumes" Jan 30 09:36:22 crc kubenswrapper[4758]: I0130 09:36:22.387426 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:36:22 crc kubenswrapper[4758]: I0130 09:36:22.388011 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:36:52 crc kubenswrapper[4758]: I0130 09:36:52.387119 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:36:52 crc kubenswrapper[4758]: I0130 09:36:52.388570 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:37:22 crc kubenswrapper[4758]: I0130 09:37:22.387586 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:37:22 crc kubenswrapper[4758]: I0130 09:37:22.388164 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:37:22 crc kubenswrapper[4758]: I0130 09:37:22.388222 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:37:22 crc kubenswrapper[4758]: I0130 09:37:22.389076 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f621d5061ef980f9a3d7bdb05f6303bc701eff4159f79eca90a822b5aba48643"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:37:22 crc kubenswrapper[4758]: I0130 09:37:22.389159 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://f621d5061ef980f9a3d7bdb05f6303bc701eff4159f79eca90a822b5aba48643" gracePeriod=600 Jan 30 09:37:23 crc kubenswrapper[4758]: I0130 09:37:23.006739 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="f621d5061ef980f9a3d7bdb05f6303bc701eff4159f79eca90a822b5aba48643" exitCode=0 Jan 30 09:37:23 crc kubenswrapper[4758]: I0130 09:37:23.007129 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"f621d5061ef980f9a3d7bdb05f6303bc701eff4159f79eca90a822b5aba48643"} Jan 30 09:37:23 crc kubenswrapper[4758]: I0130 09:37:23.007157 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32"} Jan 30 09:37:23 crc kubenswrapper[4758]: I0130 09:37:23.007174 4758 scope.go:117] "RemoveContainer" containerID="ebe2300a0f162f6257aa765f481ce3e43d0bd290ddb9ec2e738e7047e41be4a1" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.647444 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9qdgq"] Jan 30 09:38:55 crc kubenswrapper[4758]: E0130 09:38:55.648361 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerName="extract-utilities" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.648376 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerName="extract-utilities" Jan 30 09:38:55 crc kubenswrapper[4758]: E0130 09:38:55.648391 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerName="registry-server" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.648397 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerName="registry-server" Jan 30 09:38:55 crc kubenswrapper[4758]: E0130 09:38:55.648426 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerName="extract-content" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.648432 4758 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerName="extract-content" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.648615 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="debfcde3-b34c-4223-aaf0-c39aa0f5f015" containerName="registry-server" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.650051 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.659986 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qdgq"] Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.786243 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-catalog-content\") pod \"redhat-marketplace-9qdgq\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.786441 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-utilities\") pod \"redhat-marketplace-9qdgq\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.786464 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lvwk\" (UniqueName: \"kubernetes.io/projected/8cb0f705-65da-4841-a48b-0dd8e895ee79-kube-api-access-4lvwk\") pod \"redhat-marketplace-9qdgq\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.888175 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-utilities\") pod \"redhat-marketplace-9qdgq\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.888521 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lvwk\" (UniqueName: \"kubernetes.io/projected/8cb0f705-65da-4841-a48b-0dd8e895ee79-kube-api-access-4lvwk\") pod \"redhat-marketplace-9qdgq\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.888583 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-catalog-content\") pod \"redhat-marketplace-9qdgq\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.888743 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-utilities\") pod \"redhat-marketplace-9qdgq\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.889064 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-catalog-content\") pod \"redhat-marketplace-9qdgq\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.922902 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lvwk\" (UniqueName: \"kubernetes.io/projected/8cb0f705-65da-4841-a48b-0dd8e895ee79-kube-api-access-4lvwk\") pod \"redhat-marketplace-9qdgq\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:55 crc kubenswrapper[4758]: I0130 09:38:55.966956 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:38:56 crc kubenswrapper[4758]: I0130 09:38:56.555065 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qdgq"] Jan 30 09:38:56 crc kubenswrapper[4758]: I0130 09:38:56.845998 4758 generic.go:334] "Generic (PLEG): container finished" podID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerID="d8918a87876ba52cb2581f9ce297e46ecf37f213aa8947f1980878195117fc95" exitCode=0 Jan 30 09:38:56 crc kubenswrapper[4758]: I0130 09:38:56.846075 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qdgq" event={"ID":"8cb0f705-65da-4841-a48b-0dd8e895ee79","Type":"ContainerDied","Data":"d8918a87876ba52cb2581f9ce297e46ecf37f213aa8947f1980878195117fc95"} Jan 30 09:38:56 crc kubenswrapper[4758]: I0130 09:38:56.846357 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qdgq" event={"ID":"8cb0f705-65da-4841-a48b-0dd8e895ee79","Type":"ContainerStarted","Data":"1ddd4b0cc014a730c56bbafd15e932f6bf0bfa269ab021df192b18ce3276f9eb"} Jan 30 09:38:57 crc kubenswrapper[4758]: I0130 09:38:57.855400 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qdgq" event={"ID":"8cb0f705-65da-4841-a48b-0dd8e895ee79","Type":"ContainerStarted","Data":"5d27cbf8f81a66706291c23886ada6ce306f1f0c39f054a93f1814f07bec9400"} Jan 30 09:38:58 crc kubenswrapper[4758]: I0130 09:38:58.865574 4758 generic.go:334] "Generic (PLEG): container finished" podID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerID="5d27cbf8f81a66706291c23886ada6ce306f1f0c39f054a93f1814f07bec9400" exitCode=0 Jan 30 09:38:58 crc kubenswrapper[4758]: I0130 09:38:58.865898 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qdgq" event={"ID":"8cb0f705-65da-4841-a48b-0dd8e895ee79","Type":"ContainerDied","Data":"5d27cbf8f81a66706291c23886ada6ce306f1f0c39f054a93f1814f07bec9400"} Jan 30 09:38:59 crc kubenswrapper[4758]: I0130 09:38:59.875791 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qdgq" event={"ID":"8cb0f705-65da-4841-a48b-0dd8e895ee79","Type":"ContainerStarted","Data":"ac298c9dac818ad35a96cf20883fa275c13fe873d3e61744f0cb8d7965c21c86"} Jan 30 09:38:59 crc kubenswrapper[4758]: I0130 09:38:59.902430 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9qdgq" podStartSLOduration=2.517725361 podStartE2EDuration="4.9024125s" podCreationTimestamp="2026-01-30 09:38:55 +0000 UTC" firstStartedPulling="2026-01-30 09:38:56.847524374 +0000 UTC 
m=+4141.819835935" lastFinishedPulling="2026-01-30 09:38:59.232211523 +0000 UTC m=+4144.204523074" observedRunningTime="2026-01-30 09:38:59.893345095 +0000 UTC m=+4144.865656666" watchObservedRunningTime="2026-01-30 09:38:59.9024125 +0000 UTC m=+4144.874724051" Jan 30 09:39:05 crc kubenswrapper[4758]: I0130 09:39:05.967860 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:39:05 crc kubenswrapper[4758]: I0130 09:39:05.968430 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:39:06 crc kubenswrapper[4758]: I0130 09:39:06.024166 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:39:07 crc kubenswrapper[4758]: I0130 09:39:07.029146 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:39:07 crc kubenswrapper[4758]: I0130 09:39:07.091578 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qdgq"] Jan 30 09:39:08 crc kubenswrapper[4758]: I0130 09:39:08.957470 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9qdgq" podUID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerName="registry-server" containerID="cri-o://ac298c9dac818ad35a96cf20883fa275c13fe873d3e61744f0cb8d7965c21c86" gracePeriod=2 Jan 30 09:39:09 crc kubenswrapper[4758]: I0130 09:39:09.971496 4758 generic.go:334] "Generic (PLEG): container finished" podID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerID="ac298c9dac818ad35a96cf20883fa275c13fe873d3e61744f0cb8d7965c21c86" exitCode=0 Jan 30 09:39:09 crc kubenswrapper[4758]: I0130 09:39:09.971586 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qdgq" event={"ID":"8cb0f705-65da-4841-a48b-0dd8e895ee79","Type":"ContainerDied","Data":"ac298c9dac818ad35a96cf20883fa275c13fe873d3e61744f0cb8d7965c21c86"} Jan 30 09:39:09 crc kubenswrapper[4758]: I0130 09:39:09.971850 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qdgq" event={"ID":"8cb0f705-65da-4841-a48b-0dd8e895ee79","Type":"ContainerDied","Data":"1ddd4b0cc014a730c56bbafd15e932f6bf0bfa269ab021df192b18ce3276f9eb"} Jan 30 09:39:09 crc kubenswrapper[4758]: I0130 09:39:09.971867 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ddd4b0cc014a730c56bbafd15e932f6bf0bfa269ab021df192b18ce3276f9eb" Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.077071 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.222664 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lvwk\" (UniqueName: \"kubernetes.io/projected/8cb0f705-65da-4841-a48b-0dd8e895ee79-kube-api-access-4lvwk\") pod \"8cb0f705-65da-4841-a48b-0dd8e895ee79\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.222856 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-utilities\") pod \"8cb0f705-65da-4841-a48b-0dd8e895ee79\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.222944 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-catalog-content\") pod \"8cb0f705-65da-4841-a48b-0dd8e895ee79\" (UID: \"8cb0f705-65da-4841-a48b-0dd8e895ee79\") " Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.231475 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cb0f705-65da-4841-a48b-0dd8e895ee79-kube-api-access-4lvwk" (OuterVolumeSpecName: "kube-api-access-4lvwk") pod "8cb0f705-65da-4841-a48b-0dd8e895ee79" (UID: "8cb0f705-65da-4841-a48b-0dd8e895ee79"). InnerVolumeSpecName "kube-api-access-4lvwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.235264 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-utilities" (OuterVolumeSpecName: "utilities") pod "8cb0f705-65da-4841-a48b-0dd8e895ee79" (UID: "8cb0f705-65da-4841-a48b-0dd8e895ee79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.247676 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cb0f705-65da-4841-a48b-0dd8e895ee79" (UID: "8cb0f705-65da-4841-a48b-0dd8e895ee79"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.325591 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lvwk\" (UniqueName: \"kubernetes.io/projected/8cb0f705-65da-4841-a48b-0dd8e895ee79-kube-api-access-4lvwk\") on node \"crc\" DevicePath \"\"" Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.325635 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.325647 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb0f705-65da-4841-a48b-0dd8e895ee79-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:39:10 crc kubenswrapper[4758]: I0130 09:39:10.983764 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qdgq" Jan 30 09:39:11 crc kubenswrapper[4758]: I0130 09:39:11.019456 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qdgq"] Jan 30 09:39:11 crc kubenswrapper[4758]: I0130 09:39:11.029763 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qdgq"] Jan 30 09:39:11 crc kubenswrapper[4758]: I0130 09:39:11.779536 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cb0f705-65da-4841-a48b-0dd8e895ee79" path="/var/lib/kubelet/pods/8cb0f705-65da-4841-a48b-0dd8e895ee79/volumes" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.357009 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vd6gl"] Jan 30 09:39:16 crc kubenswrapper[4758]: E0130 09:39:16.358027 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerName="registry-server" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.358075 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerName="registry-server" Jan 30 09:39:16 crc kubenswrapper[4758]: E0130 09:39:16.358091 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerName="extract-content" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.358098 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerName="extract-content" Jan 30 09:39:16 crc kubenswrapper[4758]: E0130 09:39:16.358129 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerName="extract-utilities" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.358137 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerName="extract-utilities" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.358322 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cb0f705-65da-4841-a48b-0dd8e895ee79" containerName="registry-server" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.359621 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.382744 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vd6gl"] Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.440423 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xrvt\" (UniqueName: \"kubernetes.io/projected/d4b53aed-fb8c-44ae-a56e-57109de1e728-kube-api-access-5xrvt\") pod \"redhat-operators-vd6gl\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.440598 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-catalog-content\") pod \"redhat-operators-vd6gl\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.440649 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-utilities\") pod \"redhat-operators-vd6gl\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.542504 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-catalog-content\") pod \"redhat-operators-vd6gl\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.542557 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-utilities\") pod \"redhat-operators-vd6gl\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.542671 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xrvt\" (UniqueName: \"kubernetes.io/projected/d4b53aed-fb8c-44ae-a56e-57109de1e728-kube-api-access-5xrvt\") pod \"redhat-operators-vd6gl\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.543082 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-catalog-content\") pod \"redhat-operators-vd6gl\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.543122 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-utilities\") pod \"redhat-operators-vd6gl\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.571236 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5xrvt\" (UniqueName: \"kubernetes.io/projected/d4b53aed-fb8c-44ae-a56e-57109de1e728-kube-api-access-5xrvt\") pod \"redhat-operators-vd6gl\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:16 crc kubenswrapper[4758]: I0130 09:39:16.678403 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:18 crc kubenswrapper[4758]: I0130 09:39:18.250197 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vd6gl"] Jan 30 09:39:19 crc kubenswrapper[4758]: I0130 09:39:19.046542 4758 generic.go:334] "Generic (PLEG): container finished" podID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerID="5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90" exitCode=0 Jan 30 09:39:19 crc kubenswrapper[4758]: I0130 09:39:19.046545 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd6gl" event={"ID":"d4b53aed-fb8c-44ae-a56e-57109de1e728","Type":"ContainerDied","Data":"5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90"} Jan 30 09:39:19 crc kubenswrapper[4758]: I0130 09:39:19.047077 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd6gl" event={"ID":"d4b53aed-fb8c-44ae-a56e-57109de1e728","Type":"ContainerStarted","Data":"a3647e27c55a8e5e47fb5f033751f32f8b22438687007a22c34f8b75b9fb5247"} Jan 30 09:39:20 crc kubenswrapper[4758]: I0130 09:39:20.056454 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd6gl" event={"ID":"d4b53aed-fb8c-44ae-a56e-57109de1e728","Type":"ContainerStarted","Data":"0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e"} Jan 30 09:39:22 crc kubenswrapper[4758]: I0130 09:39:22.387571 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:39:22 crc kubenswrapper[4758]: I0130 09:39:22.388156 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:39:26 crc kubenswrapper[4758]: I0130 09:39:26.104148 4758 generic.go:334] "Generic (PLEG): container finished" podID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerID="0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e" exitCode=0 Jan 30 09:39:26 crc kubenswrapper[4758]: I0130 09:39:26.104253 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd6gl" event={"ID":"d4b53aed-fb8c-44ae-a56e-57109de1e728","Type":"ContainerDied","Data":"0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e"} Jan 30 09:39:27 crc kubenswrapper[4758]: I0130 09:39:27.115325 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd6gl" event={"ID":"d4b53aed-fb8c-44ae-a56e-57109de1e728","Type":"ContainerStarted","Data":"a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507"} Jan 30 09:39:27 crc kubenswrapper[4758]: I0130 09:39:27.132455 4758 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vd6gl" podStartSLOduration=3.609219744 podStartE2EDuration="11.132434041s" podCreationTimestamp="2026-01-30 09:39:16 +0000 UTC" firstStartedPulling="2026-01-30 09:39:19.048325137 +0000 UTC m=+4164.020636688" lastFinishedPulling="2026-01-30 09:39:26.571539434 +0000 UTC m=+4171.543850985" observedRunningTime="2026-01-30 09:39:27.13050252 +0000 UTC m=+4172.102814091" watchObservedRunningTime="2026-01-30 09:39:27.132434041 +0000 UTC m=+4172.104745592" Jan 30 09:39:36 crc kubenswrapper[4758]: I0130 09:39:36.679770 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:36 crc kubenswrapper[4758]: I0130 09:39:36.680438 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:39:37 crc kubenswrapper[4758]: I0130 09:39:37.732173 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vd6gl" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="registry-server" probeResult="failure" output=< Jan 30 09:39:37 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:39:37 crc kubenswrapper[4758]: > Jan 30 09:39:41 crc kubenswrapper[4758]: I0130 09:39:41.801385 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0a15517a-ff48-40d1-91b4-442bfef91fc1" containerName="galera" probeResult="failure" output="command timed out" Jan 30 09:39:47 crc kubenswrapper[4758]: I0130 09:39:47.736779 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vd6gl" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="registry-server" probeResult="failure" output=< Jan 30 09:39:47 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:39:47 crc kubenswrapper[4758]: > Jan 30 09:39:52 crc kubenswrapper[4758]: I0130 09:39:52.387577 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:39:52 crc kubenswrapper[4758]: I0130 09:39:52.388275 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:39:57 crc kubenswrapper[4758]: I0130 09:39:57.724266 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vd6gl" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="registry-server" probeResult="failure" output=< Jan 30 09:39:57 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:39:57 crc kubenswrapper[4758]: > Jan 30 09:40:06 crc kubenswrapper[4758]: I0130 09:40:06.738074 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:40:06 crc kubenswrapper[4758]: I0130 09:40:06.810948 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
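The startup probe kept failing with 'timeout: failed to connect service ":50051" within 1s' until the registry server finished loading its catalog. The check is a connect-with-deadline against the gRPC port; a plain-TCP approximation is sketched below (the real probe is a gRPC health check, so this is an assumption-laden stand-in):

package main

import (
	"fmt"
	"net"
	"time"
)

// probeTCP fails the probe if the dial does not complete inside the
// timeout, matching the shape of the ":50051 within 1s" messages.
func probeTCP(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("timeout: failed to connect service %q within %s", addr, timeout)
	}
	return conn.Close()
}

func main() {
	if err := probeTCP("127.0.0.1:50051", time.Second); err != nil {
		fmt.Println(err)
	}
}
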
pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:40:06 crc kubenswrapper[4758]: I0130 09:40:06.977742 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vd6gl"] Jan 30 09:40:07 crc kubenswrapper[4758]: I0130 09:40:07.813460 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vd6gl" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="registry-server" containerID="cri-o://a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507" gracePeriod=2 Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.395447 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.524841 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xrvt\" (UniqueName: \"kubernetes.io/projected/d4b53aed-fb8c-44ae-a56e-57109de1e728-kube-api-access-5xrvt\") pod \"d4b53aed-fb8c-44ae-a56e-57109de1e728\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.524960 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-utilities\") pod \"d4b53aed-fb8c-44ae-a56e-57109de1e728\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.525006 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-catalog-content\") pod \"d4b53aed-fb8c-44ae-a56e-57109de1e728\" (UID: \"d4b53aed-fb8c-44ae-a56e-57109de1e728\") " Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.525872 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-utilities" (OuterVolumeSpecName: "utilities") pod "d4b53aed-fb8c-44ae-a56e-57109de1e728" (UID: "d4b53aed-fb8c-44ae-a56e-57109de1e728"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.534892 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b53aed-fb8c-44ae-a56e-57109de1e728-kube-api-access-5xrvt" (OuterVolumeSpecName: "kube-api-access-5xrvt") pod "d4b53aed-fb8c-44ae-a56e-57109de1e728" (UID: "d4b53aed-fb8c-44ae-a56e-57109de1e728"). InnerVolumeSpecName "kube-api-access-5xrvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.627192 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xrvt\" (UniqueName: \"kubernetes.io/projected/d4b53aed-fb8c-44ae-a56e-57109de1e728-kube-api-access-5xrvt\") on node \"crc\" DevicePath \"\"" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.627222 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.647748 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4b53aed-fb8c-44ae-a56e-57109de1e728" (UID: "d4b53aed-fb8c-44ae-a56e-57109de1e728"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.729602 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b53aed-fb8c-44ae-a56e-57109de1e728-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.823331 4758 generic.go:334] "Generic (PLEG): container finished" podID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerID="a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507" exitCode=0 Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.823367 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd6gl" event={"ID":"d4b53aed-fb8c-44ae-a56e-57109de1e728","Type":"ContainerDied","Data":"a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507"} Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.823397 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vd6gl" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.823427 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd6gl" event={"ID":"d4b53aed-fb8c-44ae-a56e-57109de1e728","Type":"ContainerDied","Data":"a3647e27c55a8e5e47fb5f033751f32f8b22438687007a22c34f8b75b9fb5247"} Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.823451 4758 scope.go:117] "RemoveContainer" containerID="a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.849527 4758 scope.go:117] "RemoveContainer" containerID="0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.858376 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vd6gl"] Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.869648 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vd6gl"] Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.881421 4758 scope.go:117] "RemoveContainer" containerID="5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.916844 4758 scope.go:117] "RemoveContainer" containerID="a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507" Jan 30 09:40:08 crc kubenswrapper[4758]: E0130 09:40:08.917310 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507\": container with ID starting with a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507 not found: ID does not exist" containerID="a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.917398 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507"} err="failed to get container status \"a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507\": rpc error: code = NotFound desc = could not find container \"a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507\": container with ID starting with a50995e53fb044dc49a486dc1759f0af667ae4dcb144643ed0be72daac30e507 not found: ID does not exist" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.917489 4758 scope.go:117] "RemoveContainer" containerID="0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e" Jan 30 09:40:08 crc kubenswrapper[4758]: E0130 09:40:08.917986 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e\": container with ID starting with 0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e not found: ID does not exist" containerID="0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.918016 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e"} err="failed to get container status \"0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e\": rpc error: code = NotFound desc = could not find container 
\"0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e\": container with ID starting with 0b5b92e1808207eb0d53017a5c6382a0923e7ec88960f9d623f5a7c6e429c46e not found: ID does not exist" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.918049 4758 scope.go:117] "RemoveContainer" containerID="5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90" Jan 30 09:40:08 crc kubenswrapper[4758]: E0130 09:40:08.918280 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90\": container with ID starting with 5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90 not found: ID does not exist" containerID="5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90" Jan 30 09:40:08 crc kubenswrapper[4758]: I0130 09:40:08.918361 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90"} err="failed to get container status \"5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90\": rpc error: code = NotFound desc = could not find container \"5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90\": container with ID starting with 5e7ca4bad86e64de5629bba5c2d67a97015c1a34cdf8157b5aed7da845ae3c90 not found: ID does not exist" Jan 30 09:40:09 crc kubenswrapper[4758]: I0130 09:40:09.778953 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" path="/var/lib/kubelet/pods/d4b53aed-fb8c-44ae-a56e-57109de1e728/volumes" Jan 30 09:40:22 crc kubenswrapper[4758]: I0130 09:40:22.387482 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:40:22 crc kubenswrapper[4758]: I0130 09:40:22.388440 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:40:22 crc kubenswrapper[4758]: I0130 09:40:22.388498 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:40:22 crc kubenswrapper[4758]: I0130 09:40:22.389139 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:40:22 crc kubenswrapper[4758]: I0130 09:40:22.389208 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" gracePeriod=600 Jan 30 09:40:22 crc kubenswrapper[4758]: E0130 09:40:22.538839 4758 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:40:22 crc kubenswrapper[4758]: I0130 09:40:22.951513 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" exitCode=0 Jan 30 09:40:22 crc kubenswrapper[4758]: I0130 09:40:22.951819 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32"} Jan 30 09:40:22 crc kubenswrapper[4758]: I0130 09:40:22.951852 4758 scope.go:117] "RemoveContainer" containerID="f621d5061ef980f9a3d7bdb05f6303bc701eff4159f79eca90a822b5aba48643" Jan 30 09:40:22 crc kubenswrapper[4758]: I0130 09:40:22.952486 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:40:22 crc kubenswrapper[4758]: E0130 09:40:22.952733 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:40:35 crc kubenswrapper[4758]: I0130 09:40:35.774919 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:40:35 crc kubenswrapper[4758]: E0130 09:40:35.775693 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:40:50 crc kubenswrapper[4758]: I0130 09:40:50.768438 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:40:50 crc kubenswrapper[4758]: E0130 09:40:50.769332 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:41:04 crc kubenswrapper[4758]: I0130 09:41:04.769232 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:41:04 crc kubenswrapper[4758]: E0130 09:41:04.770589 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:41:16 crc kubenswrapper[4758]: I0130 09:41:16.768676 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:41:16 crc kubenswrapper[4758]: E0130 09:41:16.769408 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:41:28 crc kubenswrapper[4758]: I0130 09:41:28.768302 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:41:28 crc kubenswrapper[4758]: E0130 09:41:28.769726 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.359601 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2jcgk"] Jan 30 09:41:37 crc kubenswrapper[4758]: E0130 09:41:37.360729 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="registry-server" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.360744 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="registry-server" Jan 30 09:41:37 crc kubenswrapper[4758]: E0130 09:41:37.360757 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="extract-content" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.360763 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="extract-content" Jan 30 09:41:37 crc kubenswrapper[4758]: E0130 09:41:37.360792 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="extract-utilities" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.360800 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="extract-utilities" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.361003 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b53aed-fb8c-44ae-a56e-57109de1e728" containerName="registry-server" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.362636 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.371374 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2jcgk"] Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.542477 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-utilities\") pod \"certified-operators-2jcgk\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.543408 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq7cf\" (UniqueName: \"kubernetes.io/projected/604c3bea-aed3-47c1-907a-6353466ebd3d-kube-api-access-hq7cf\") pod \"certified-operators-2jcgk\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.543541 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-catalog-content\") pod \"certified-operators-2jcgk\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.645276 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-utilities\") pod \"certified-operators-2jcgk\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.645389 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq7cf\" (UniqueName: \"kubernetes.io/projected/604c3bea-aed3-47c1-907a-6353466ebd3d-kube-api-access-hq7cf\") pod \"certified-operators-2jcgk\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.645419 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-catalog-content\") pod \"certified-operators-2jcgk\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.645841 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-utilities\") pod \"certified-operators-2jcgk\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.646296 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-catalog-content\") pod \"certified-operators-2jcgk\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.666509 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hq7cf\" (UniqueName: \"kubernetes.io/projected/604c3bea-aed3-47c1-907a-6353466ebd3d-kube-api-access-hq7cf\") pod \"certified-operators-2jcgk\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:37 crc kubenswrapper[4758]: I0130 09:41:37.725399 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:38 crc kubenswrapper[4758]: I0130 09:41:38.302069 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2jcgk"] Jan 30 09:41:38 crc kubenswrapper[4758]: I0130 09:41:38.580119 4758 generic.go:334] "Generic (PLEG): container finished" podID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerID="cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995" exitCode=0 Jan 30 09:41:38 crc kubenswrapper[4758]: I0130 09:41:38.580162 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jcgk" event={"ID":"604c3bea-aed3-47c1-907a-6353466ebd3d","Type":"ContainerDied","Data":"cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995"} Jan 30 09:41:38 crc kubenswrapper[4758]: I0130 09:41:38.580191 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jcgk" event={"ID":"604c3bea-aed3-47c1-907a-6353466ebd3d","Type":"ContainerStarted","Data":"273cfcdb73acd4276ef5b7f53190801a5e0944ad916fb68665013d4d4ff85c64"} Jan 30 09:41:38 crc kubenswrapper[4758]: I0130 09:41:38.581963 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:41:39 crc kubenswrapper[4758]: I0130 09:41:39.768772 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:41:39 crc kubenswrapper[4758]: E0130 09:41:39.769416 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:41:40 crc kubenswrapper[4758]: I0130 09:41:40.617160 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jcgk" event={"ID":"604c3bea-aed3-47c1-907a-6353466ebd3d","Type":"ContainerStarted","Data":"1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7"} Jan 30 09:41:41 crc kubenswrapper[4758]: I0130 09:41:41.627107 4758 generic.go:334] "Generic (PLEG): container finished" podID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerID="1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7" exitCode=0 Jan 30 09:41:41 crc kubenswrapper[4758]: I0130 09:41:41.627209 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jcgk" event={"ID":"604c3bea-aed3-47c1-907a-6353466ebd3d","Type":"ContainerDied","Data":"1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7"} Jan 30 09:41:42 crc kubenswrapper[4758]: I0130 09:41:42.641735 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jcgk" 
event={"ID":"604c3bea-aed3-47c1-907a-6353466ebd3d","Type":"ContainerStarted","Data":"1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19"} Jan 30 09:41:42 crc kubenswrapper[4758]: I0130 09:41:42.681067 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2jcgk" podStartSLOduration=2.230496544 podStartE2EDuration="5.681025762s" podCreationTimestamp="2026-01-30 09:41:37 +0000 UTC" firstStartedPulling="2026-01-30 09:41:38.581712784 +0000 UTC m=+4303.554024345" lastFinishedPulling="2026-01-30 09:41:42.032242022 +0000 UTC m=+4307.004553563" observedRunningTime="2026-01-30 09:41:42.66413699 +0000 UTC m=+4307.636448611" watchObservedRunningTime="2026-01-30 09:41:42.681025762 +0000 UTC m=+4307.653337323" Jan 30 09:41:47 crc kubenswrapper[4758]: I0130 09:41:47.726802 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:47 crc kubenswrapper[4758]: I0130 09:41:47.727372 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:47 crc kubenswrapper[4758]: I0130 09:41:47.793276 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:48 crc kubenswrapper[4758]: I0130 09:41:48.738500 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:48 crc kubenswrapper[4758]: I0130 09:41:48.788906 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2jcgk"] Jan 30 09:41:50 crc kubenswrapper[4758]: I0130 09:41:50.713845 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2jcgk" podUID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerName="registry-server" containerID="cri-o://1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19" gracePeriod=2 Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.186828 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.296723 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-catalog-content\") pod \"604c3bea-aed3-47c1-907a-6353466ebd3d\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.296808 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq7cf\" (UniqueName: \"kubernetes.io/projected/604c3bea-aed3-47c1-907a-6353466ebd3d-kube-api-access-hq7cf\") pod \"604c3bea-aed3-47c1-907a-6353466ebd3d\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.297315 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-utilities\") pod \"604c3bea-aed3-47c1-907a-6353466ebd3d\" (UID: \"604c3bea-aed3-47c1-907a-6353466ebd3d\") " Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.298166 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-utilities" (OuterVolumeSpecName: "utilities") pod "604c3bea-aed3-47c1-907a-6353466ebd3d" (UID: "604c3bea-aed3-47c1-907a-6353466ebd3d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.307376 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604c3bea-aed3-47c1-907a-6353466ebd3d-kube-api-access-hq7cf" (OuterVolumeSpecName: "kube-api-access-hq7cf") pod "604c3bea-aed3-47c1-907a-6353466ebd3d" (UID: "604c3bea-aed3-47c1-907a-6353466ebd3d"). InnerVolumeSpecName "kube-api-access-hq7cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.399791 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.399832 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq7cf\" (UniqueName: \"kubernetes.io/projected/604c3bea-aed3-47c1-907a-6353466ebd3d-kube-api-access-hq7cf\") on node \"crc\" DevicePath \"\"" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.725174 4758 generic.go:334] "Generic (PLEG): container finished" podID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerID="1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19" exitCode=0 Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.725216 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jcgk" event={"ID":"604c3bea-aed3-47c1-907a-6353466ebd3d","Type":"ContainerDied","Data":"1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19"} Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.725240 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jcgk" event={"ID":"604c3bea-aed3-47c1-907a-6353466ebd3d","Type":"ContainerDied","Data":"273cfcdb73acd4276ef5b7f53190801a5e0944ad916fb68665013d4d4ff85c64"} Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.725236 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2jcgk" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.725255 4758 scope.go:117] "RemoveContainer" containerID="1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.758974 4758 scope.go:117] "RemoveContainer" containerID="1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.791848 4758 scope.go:117] "RemoveContainer" containerID="cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.842598 4758 scope.go:117] "RemoveContainer" containerID="1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19" Jan 30 09:41:51 crc kubenswrapper[4758]: E0130 09:41:51.843885 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19\": container with ID starting with 1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19 not found: ID does not exist" containerID="1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.843925 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19"} err="failed to get container status \"1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19\": rpc error: code = NotFound desc = could not find container \"1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19\": container with ID starting with 1cf3ab8ed337178aeaf21322c45924f1749f11381f9bfeb73f6ecb6a61ccba19 not found: ID does not exist" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.843949 4758 scope.go:117] 
"RemoveContainer" containerID="1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7" Jan 30 09:41:51 crc kubenswrapper[4758]: E0130 09:41:51.844229 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7\": container with ID starting with 1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7 not found: ID does not exist" containerID="1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.844256 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7"} err="failed to get container status \"1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7\": rpc error: code = NotFound desc = could not find container \"1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7\": container with ID starting with 1d905399aad208ef04d4f2e6ed156ce2a0db2c1dc8867a956435df23a98f3bb7 not found: ID does not exist" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.844273 4758 scope.go:117] "RemoveContainer" containerID="cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995" Jan 30 09:41:51 crc kubenswrapper[4758]: E0130 09:41:51.844761 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995\": container with ID starting with cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995 not found: ID does not exist" containerID="cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.844791 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995"} err="failed to get container status \"cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995\": rpc error: code = NotFound desc = could not find container \"cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995\": container with ID starting with cc9d1e29d1c01a83d6f68b66618e648692208ed4ef76c15dbd6914674c9b8995 not found: ID does not exist" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.864803 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "604c3bea-aed3-47c1-907a-6353466ebd3d" (UID: "604c3bea-aed3-47c1-907a-6353466ebd3d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:41:51 crc kubenswrapper[4758]: I0130 09:41:51.915463 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/604c3bea-aed3-47c1-907a-6353466ebd3d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:41:52 crc kubenswrapper[4758]: I0130 09:41:52.054756 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2jcgk"] Jan 30 09:41:52 crc kubenswrapper[4758]: I0130 09:41:52.065238 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2jcgk"] Jan 30 09:41:53 crc kubenswrapper[4758]: I0130 09:41:53.781559 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="604c3bea-aed3-47c1-907a-6353466ebd3d" path="/var/lib/kubelet/pods/604c3bea-aed3-47c1-907a-6353466ebd3d/volumes" Jan 30 09:41:54 crc kubenswrapper[4758]: I0130 09:41:54.768348 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:41:54 crc kubenswrapper[4758]: E0130 09:41:54.768867 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:42:05 crc kubenswrapper[4758]: I0130 09:42:05.774522 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:42:05 crc kubenswrapper[4758]: E0130 09:42:05.775358 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:42:16 crc kubenswrapper[4758]: I0130 09:42:16.768026 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:42:16 crc kubenswrapper[4758]: E0130 09:42:16.769785 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:42:31 crc kubenswrapper[4758]: I0130 09:42:31.769114 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:42:31 crc kubenswrapper[4758]: E0130 09:42:31.769902 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:42:43 crc kubenswrapper[4758]: I0130 09:42:43.769407 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:42:43 crc kubenswrapper[4758]: E0130 09:42:43.770273 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:42:58 crc kubenswrapper[4758]: I0130 09:42:58.770019 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:42:58 crc kubenswrapper[4758]: E0130 09:42:58.770903 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:43:10 crc kubenswrapper[4758]: I0130 09:43:10.769297 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:43:10 crc kubenswrapper[4758]: E0130 09:43:10.770352 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:43:23 crc kubenswrapper[4758]: I0130 09:43:23.769112 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:43:23 crc kubenswrapper[4758]: E0130 09:43:23.770116 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:43:38 crc kubenswrapper[4758]: I0130 09:43:38.768596 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:43:38 crc kubenswrapper[4758]: E0130 09:43:38.769315 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:43:50 crc kubenswrapper[4758]: I0130 09:43:50.768596 4758 
scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:43:50 crc kubenswrapper[4758]: E0130 09:43:50.769521 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:44:04 crc kubenswrapper[4758]: I0130 09:44:04.769871 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:44:04 crc kubenswrapper[4758]: E0130 09:44:04.770708 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:44:17 crc kubenswrapper[4758]: I0130 09:44:17.768456 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:44:17 crc kubenswrapper[4758]: E0130 09:44:17.769314 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:44:28 crc kubenswrapper[4758]: I0130 09:44:28.769101 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:44:28 crc kubenswrapper[4758]: E0130 09:44:28.769945 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:44:43 crc kubenswrapper[4758]: I0130 09:44:43.768488 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:44:43 crc kubenswrapper[4758]: E0130 09:44:43.769309 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:44:55 crc kubenswrapper[4758]: I0130 09:44:55.778125 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:44:55 crc kubenswrapper[4758]: E0130 09:44:55.779332 4758 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.177698 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9"] Jan 30 09:45:00 crc kubenswrapper[4758]: E0130 09:45:00.178617 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerName="extract-utilities" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.178633 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerName="extract-utilities" Jan 30 09:45:00 crc kubenswrapper[4758]: E0130 09:45:00.178650 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerName="extract-content" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.178658 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerName="extract-content" Jan 30 09:45:00 crc kubenswrapper[4758]: E0130 09:45:00.178681 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerName="registry-server" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.178689 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerName="registry-server" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.178922 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="604c3bea-aed3-47c1-907a-6353466ebd3d" containerName="registry-server" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.179696 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.182197 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.182227 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.201264 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9"] Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.287901 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrktq\" (UniqueName: \"kubernetes.io/projected/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-kube-api-access-nrktq\") pod \"collect-profiles-29496105-rtlj9\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.288451 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-config-volume\") pod \"collect-profiles-29496105-rtlj9\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.288649 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-secret-volume\") pod \"collect-profiles-29496105-rtlj9\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.390671 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrktq\" (UniqueName: \"kubernetes.io/projected/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-kube-api-access-nrktq\") pod \"collect-profiles-29496105-rtlj9\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.390929 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-config-volume\") pod \"collect-profiles-29496105-rtlj9\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.391014 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-secret-volume\") pod \"collect-profiles-29496105-rtlj9\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.391931 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-config-volume\") pod 
\"collect-profiles-29496105-rtlj9\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.403170 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-secret-volume\") pod \"collect-profiles-29496105-rtlj9\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.412539 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrktq\" (UniqueName: \"kubernetes.io/projected/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-kube-api-access-nrktq\") pod \"collect-profiles-29496105-rtlj9\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.499147 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:00 crc kubenswrapper[4758]: I0130 09:45:00.946477 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9"] Jan 30 09:45:01 crc kubenswrapper[4758]: I0130 09:45:01.507843 4758 generic.go:334] "Generic (PLEG): container finished" podID="2e2ef399-bb23-4c1b-8a4c-baaa621c5c40" containerID="93889ac7c0f1bc37603be56029065806c7d4dfddcb4b8e399430cc4b379e1169" exitCode=0 Jan 30 09:45:01 crc kubenswrapper[4758]: I0130 09:45:01.507889 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" event={"ID":"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40","Type":"ContainerDied","Data":"93889ac7c0f1bc37603be56029065806c7d4dfddcb4b8e399430cc4b379e1169"} Jan 30 09:45:01 crc kubenswrapper[4758]: I0130 09:45:01.508181 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" event={"ID":"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40","Type":"ContainerStarted","Data":"28a7f75c279992a219a407e21608e01b03e79c103971aa64a29c9a234f414260"} Jan 30 09:45:02 crc kubenswrapper[4758]: I0130 09:45:02.913509 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.044989 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-config-volume\") pod \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.045208 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-secret-volume\") pod \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.045239 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrktq\" (UniqueName: \"kubernetes.io/projected/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-kube-api-access-nrktq\") pod \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\" (UID: \"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40\") " Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.046670 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-config-volume" (OuterVolumeSpecName: "config-volume") pod "2e2ef399-bb23-4c1b-8a4c-baaa621c5c40" (UID: "2e2ef399-bb23-4c1b-8a4c-baaa621c5c40"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.053169 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-kube-api-access-nrktq" (OuterVolumeSpecName: "kube-api-access-nrktq") pod "2e2ef399-bb23-4c1b-8a4c-baaa621c5c40" (UID: "2e2ef399-bb23-4c1b-8a4c-baaa621c5c40"). InnerVolumeSpecName "kube-api-access-nrktq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.053465 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2e2ef399-bb23-4c1b-8a4c-baaa621c5c40" (UID: "2e2ef399-bb23-4c1b-8a4c-baaa621c5c40"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.147719 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.147769 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrktq\" (UniqueName: \"kubernetes.io/projected/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-kube-api-access-nrktq\") on node \"crc\" DevicePath \"\"" Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.147778 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e2ef399-bb23-4c1b-8a4c-baaa621c5c40-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.524775 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" event={"ID":"2e2ef399-bb23-4c1b-8a4c-baaa621c5c40","Type":"ContainerDied","Data":"28a7f75c279992a219a407e21608e01b03e79c103971aa64a29c9a234f414260"} Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.524821 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496105-rtlj9" Jan 30 09:45:03 crc kubenswrapper[4758]: I0130 09:45:03.525030 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28a7f75c279992a219a407e21608e01b03e79c103971aa64a29c9a234f414260" Jan 30 09:45:04 crc kubenswrapper[4758]: I0130 09:45:04.008598 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx"] Jan 30 09:45:04 crc kubenswrapper[4758]: I0130 09:45:04.016392 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496060-f2nqx"] Jan 30 09:45:05 crc kubenswrapper[4758]: I0130 09:45:05.781329 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8b22039-92b3-4488-a971-2913dedb64ed" path="/var/lib/kubelet/pods/e8b22039-92b3-4488-a971-2913dedb64ed/volumes" Jan 30 09:45:06 crc kubenswrapper[4758]: I0130 09:45:06.769151 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:45:06 crc kubenswrapper[4758]: E0130 09:45:06.769670 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:45:11 crc kubenswrapper[4758]: I0130 09:45:11.037262 4758 scope.go:117] "RemoveContainer" containerID="5d27cbf8f81a66706291c23886ada6ce306f1f0c39f054a93f1814f07bec9400" Jan 30 09:45:11 crc kubenswrapper[4758]: I0130 09:45:11.060184 4758 scope.go:117] "RemoveContainer" containerID="d8918a87876ba52cb2581f9ce297e46ecf37f213aa8947f1980878195117fc95" Jan 30 09:45:11 crc kubenswrapper[4758]: I0130 09:45:11.120222 4758 scope.go:117] "RemoveContainer" containerID="ac298c9dac818ad35a96cf20883fa275c13fe873d3e61744f0cb8d7965c21c86" Jan 30 09:45:11 crc kubenswrapper[4758]: I0130 09:45:11.159835 
4758 scope.go:117] "RemoveContainer" containerID="04e032ddde7c1d4a35b13c1c04e206e7ca345c709ebf2a8892f46a64701c926c" Jan 30 09:45:18 crc kubenswrapper[4758]: I0130 09:45:18.768871 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:45:18 crc kubenswrapper[4758]: E0130 09:45:18.769628 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:45:30 crc kubenswrapper[4758]: I0130 09:45:30.769909 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:45:31 crc kubenswrapper[4758]: I0130 09:45:31.791618 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"b8ffbf2a6303e1bf0dae7b91841734487c9b9b731418440f9ba74df848313204"} Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.729357 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vmj29"] Jan 30 09:46:11 crc kubenswrapper[4758]: E0130 09:46:11.730379 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2ef399-bb23-4c1b-8a4c-baaa621c5c40" containerName="collect-profiles" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.730397 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2ef399-bb23-4c1b-8a4c-baaa621c5c40" containerName="collect-profiles" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.730717 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e2ef399-bb23-4c1b-8a4c-baaa621c5c40" containerName="collect-profiles" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.732383 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.749018 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vmj29"] Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.807984 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs5k4\" (UniqueName: \"kubernetes.io/projected/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-kube-api-access-qs5k4\") pod \"community-operators-vmj29\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.808302 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-utilities\") pod \"community-operators-vmj29\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.808331 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-catalog-content\") pod \"community-operators-vmj29\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.909579 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-utilities\") pod \"community-operators-vmj29\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.909652 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-catalog-content\") pod \"community-operators-vmj29\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.909755 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs5k4\" (UniqueName: \"kubernetes.io/projected/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-kube-api-access-qs5k4\") pod \"community-operators-vmj29\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.910392 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-utilities\") pod \"community-operators-vmj29\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.910392 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-catalog-content\") pod \"community-operators-vmj29\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:11 crc kubenswrapper[4758]: I0130 09:46:11.935942 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qs5k4\" (UniqueName: \"kubernetes.io/projected/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-kube-api-access-qs5k4\") pod \"community-operators-vmj29\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:12 crc kubenswrapper[4758]: I0130 09:46:12.051341 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:12 crc kubenswrapper[4758]: I0130 09:46:12.663912 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vmj29"] Jan 30 09:46:13 crc kubenswrapper[4758]: I0130 09:46:13.134830 4758 generic.go:334] "Generic (PLEG): container finished" podID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerID="08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc" exitCode=0 Jan 30 09:46:13 crc kubenswrapper[4758]: I0130 09:46:13.134894 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmj29" event={"ID":"f58c80cf-e686-4eb0-bd9f-40aa842e2c34","Type":"ContainerDied","Data":"08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc"} Jan 30 09:46:13 crc kubenswrapper[4758]: I0130 09:46:13.135198 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmj29" event={"ID":"f58c80cf-e686-4eb0-bd9f-40aa842e2c34","Type":"ContainerStarted","Data":"b0a7e44dbba51b78ce0570a5f21cc47fc7c7673c55ce981aae9b66a918c2e48a"} Jan 30 09:46:14 crc kubenswrapper[4758]: I0130 09:46:14.146368 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmj29" event={"ID":"f58c80cf-e686-4eb0-bd9f-40aa842e2c34","Type":"ContainerStarted","Data":"96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e"} Jan 30 09:46:16 crc kubenswrapper[4758]: I0130 09:46:16.166819 4758 generic.go:334] "Generic (PLEG): container finished" podID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerID="96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e" exitCode=0 Jan 30 09:46:16 crc kubenswrapper[4758]: I0130 09:46:16.166882 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmj29" event={"ID":"f58c80cf-e686-4eb0-bd9f-40aa842e2c34","Type":"ContainerDied","Data":"96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e"} Jan 30 09:46:17 crc kubenswrapper[4758]: I0130 09:46:17.177719 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmj29" event={"ID":"f58c80cf-e686-4eb0-bd9f-40aa842e2c34","Type":"ContainerStarted","Data":"099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12"} Jan 30 09:46:17 crc kubenswrapper[4758]: I0130 09:46:17.207602 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vmj29" podStartSLOduration=2.750898953 podStartE2EDuration="6.207582231s" podCreationTimestamp="2026-01-30 09:46:11 +0000 UTC" firstStartedPulling="2026-01-30 09:46:13.136943986 +0000 UTC m=+4578.109255537" lastFinishedPulling="2026-01-30 09:46:16.593627264 +0000 UTC m=+4581.565938815" observedRunningTime="2026-01-30 09:46:17.196373578 +0000 UTC m=+4582.168685159" watchObservedRunningTime="2026-01-30 09:46:17.207582231 +0000 UTC m=+4582.179893812" Jan 30 09:46:22 crc kubenswrapper[4758]: I0130 09:46:22.052234 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:22 crc kubenswrapper[4758]: I0130 09:46:22.052956 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:23 crc kubenswrapper[4758]: I0130 09:46:23.206416 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vmj29" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerName="registry-server" probeResult="failure" output=< Jan 30 09:46:23 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 09:46:23 crc kubenswrapper[4758]: > Jan 30 09:46:32 crc kubenswrapper[4758]: I0130 09:46:32.174923 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:32 crc kubenswrapper[4758]: I0130 09:46:32.223278 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:33 crc kubenswrapper[4758]: I0130 09:46:33.093030 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vmj29"] Jan 30 09:46:33 crc kubenswrapper[4758]: I0130 09:46:33.327324 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vmj29" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerName="registry-server" containerID="cri-o://099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12" gracePeriod=2 Jan 30 09:46:33 crc kubenswrapper[4758]: I0130 09:46:33.857355 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.016377 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-utilities\") pod \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.016511 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs5k4\" (UniqueName: \"kubernetes.io/projected/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-kube-api-access-qs5k4\") pod \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.016582 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-catalog-content\") pod \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\" (UID: \"f58c80cf-e686-4eb0-bd9f-40aa842e2c34\") " Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.017539 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-utilities" (OuterVolumeSpecName: "utilities") pod "f58c80cf-e686-4eb0-bd9f-40aa842e2c34" (UID: "f58c80cf-e686-4eb0-bd9f-40aa842e2c34"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.020786 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.022593 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-kube-api-access-qs5k4" (OuterVolumeSpecName: "kube-api-access-qs5k4") pod "f58c80cf-e686-4eb0-bd9f-40aa842e2c34" (UID: "f58c80cf-e686-4eb0-bd9f-40aa842e2c34"). InnerVolumeSpecName "kube-api-access-qs5k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.097057 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f58c80cf-e686-4eb0-bd9f-40aa842e2c34" (UID: "f58c80cf-e686-4eb0-bd9f-40aa842e2c34"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.122732 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs5k4\" (UniqueName: \"kubernetes.io/projected/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-kube-api-access-qs5k4\") on node \"crc\" DevicePath \"\"" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.122765 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f58c80cf-e686-4eb0-bd9f-40aa842e2c34-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.338104 4758 generic.go:334] "Generic (PLEG): container finished" podID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerID="099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12" exitCode=0 Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.338178 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmj29" event={"ID":"f58c80cf-e686-4eb0-bd9f-40aa842e2c34","Type":"ContainerDied","Data":"099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12"} Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.338228 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmj29" event={"ID":"f58c80cf-e686-4eb0-bd9f-40aa842e2c34","Type":"ContainerDied","Data":"b0a7e44dbba51b78ce0570a5f21cc47fc7c7673c55ce981aae9b66a918c2e48a"} Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.338228 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vmj29" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.338265 4758 scope.go:117] "RemoveContainer" containerID="099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.363435 4758 scope.go:117] "RemoveContainer" containerID="96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.386287 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vmj29"] Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.389413 4758 scope.go:117] "RemoveContainer" containerID="08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.406148 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vmj29"] Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.429012 4758 scope.go:117] "RemoveContainer" containerID="099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12" Jan 30 09:46:34 crc kubenswrapper[4758]: E0130 09:46:34.429464 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12\": container with ID starting with 099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12 not found: ID does not exist" containerID="099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.429511 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12"} err="failed to get container status \"099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12\": rpc error: code = NotFound desc = could not find container \"099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12\": container with ID starting with 099f87052896902710f7a79a1c3afe17726f97d3d8a6c90f4298e000e69fee12 not found: ID does not exist" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.429540 4758 scope.go:117] "RemoveContainer" containerID="96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e" Jan 30 09:46:34 crc kubenswrapper[4758]: E0130 09:46:34.429791 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e\": container with ID starting with 96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e not found: ID does not exist" containerID="96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.429810 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e"} err="failed to get container status \"96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e\": rpc error: code = NotFound desc = could not find container \"96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e\": container with ID starting with 96f06740eac86550f8549e16e20598237683cec8adf7b8308972420285b1566e not found: ID does not exist" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.429822 4758 scope.go:117] "RemoveContainer" 
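The ContainerStatus ... NotFound errors above are a benign cleanup race, not a failure: after the pod is force-removed (SyncLoop REMOVE), the kubelet re-issues RemoveContainer for containers CRI-O has already deleted, the runtime answers gRPC NotFound, and the kubelet logs "DeleteContainer returned error" and moves on, since the container being gone is the desired end state. A sketch of the kubelet-side pattern:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// A gRPC NotFound from the runtime on delete/status means the container is
// already gone, which is what we wanted; treat it as success.
func alreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	err := status.Error(codes.NotFound, "could not find container")
	if alreadyGone(err) {
		fmt.Println("container already removed; nothing to do")
	}
}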
containerID="08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc" Jan 30 09:46:34 crc kubenswrapper[4758]: E0130 09:46:34.429988 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc\": container with ID starting with 08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc not found: ID does not exist" containerID="08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc" Jan 30 09:46:34 crc kubenswrapper[4758]: I0130 09:46:34.430006 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc"} err="failed to get container status \"08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc\": rpc error: code = NotFound desc = could not find container \"08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc\": container with ID starting with 08897d82e747efb8fd3d06ccfbdb91840c3cc9dccc22b07b1cb190a624deb2fc not found: ID does not exist" Jan 30 09:46:35 crc kubenswrapper[4758]: I0130 09:46:35.779352 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" path="/var/lib/kubelet/pods/f58c80cf-e686-4eb0-bd9f-40aa842e2c34/volumes" Jan 30 09:47:52 crc kubenswrapper[4758]: I0130 09:47:52.387216 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:47:52 crc kubenswrapper[4758]: I0130 09:47:52.388950 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:48:22 crc kubenswrapper[4758]: I0130 09:48:22.387894 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:48:22 crc kubenswrapper[4758]: I0130 09:48:22.389942 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:48:52 crc kubenswrapper[4758]: I0130 09:48:52.387460 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:48:52 crc kubenswrapper[4758]: I0130 09:48:52.388152 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:48:52 crc kubenswrapper[4758]: I0130 09:48:52.388210 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 09:48:52 crc kubenswrapper[4758]: I0130 09:48:52.388984 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b8ffbf2a6303e1bf0dae7b91841734487c9b9b731418440f9ba74df848313204"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 09:48:52 crc kubenswrapper[4758]: I0130 09:48:52.389059 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://b8ffbf2a6303e1bf0dae7b91841734487c9b9b731418440f9ba74df848313204" gracePeriod=600 Jan 30 09:48:53 crc kubenswrapper[4758]: I0130 09:48:53.485419 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="b8ffbf2a6303e1bf0dae7b91841734487c9b9b731418440f9ba74df848313204" exitCode=0 Jan 30 09:48:53 crc kubenswrapper[4758]: I0130 09:48:53.485487 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"b8ffbf2a6303e1bf0dae7b91841734487c9b9b731418440f9ba74df848313204"} Jan 30 09:48:53 crc kubenswrapper[4758]: I0130 09:48:53.486054 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3"} Jan 30 09:48:53 crc kubenswrapper[4758]: I0130 09:48:53.486081 4758 scope.go:117] "RemoveContainer" containerID="d3668997e075e56c48b4a7cc28b129f62e77004b823df286307afa893090bd32" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.923961 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hkwlg"] Jan 30 09:50:22 crc kubenswrapper[4758]: E0130 09:50:22.925007 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerName="registry-server" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.925023 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerName="registry-server" Jan 30 09:50:22 crc kubenswrapper[4758]: E0130 09:50:22.925057 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerName="extract-utilities" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.925067 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerName="extract-utilities" Jan 30 09:50:22 crc kubenswrapper[4758]: E0130 09:50:22.925082 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerName="extract-content" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.925093 4758 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerName="extract-content" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.925344 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="f58c80cf-e686-4eb0-bd9f-40aa842e2c34" containerName="registry-server" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.927127 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.951345 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-utilities\") pod \"redhat-marketplace-hkwlg\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.951745 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8brr\" (UniqueName: \"kubernetes.io/projected/bb4421b8-2d05-42aa-823d-95f4d4704c94-kube-api-access-r8brr\") pod \"redhat-marketplace-hkwlg\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.951780 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-catalog-content\") pod \"redhat-marketplace-hkwlg\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:22 crc kubenswrapper[4758]: I0130 09:50:22.965776 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkwlg"] Jan 30 09:50:23 crc kubenswrapper[4758]: I0130 09:50:23.053833 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-utilities\") pod \"redhat-marketplace-hkwlg\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:23 crc kubenswrapper[4758]: I0130 09:50:23.054010 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8brr\" (UniqueName: \"kubernetes.io/projected/bb4421b8-2d05-42aa-823d-95f4d4704c94-kube-api-access-r8brr\") pod \"redhat-marketplace-hkwlg\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:23 crc kubenswrapper[4758]: I0130 09:50:23.054052 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-catalog-content\") pod \"redhat-marketplace-hkwlg\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:23 crc kubenswrapper[4758]: I0130 09:50:23.054398 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-utilities\") pod \"redhat-marketplace-hkwlg\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:23 crc kubenswrapper[4758]: I0130 09:50:23.054463 4758 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-catalog-content\") pod \"redhat-marketplace-hkwlg\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:23 crc kubenswrapper[4758]: I0130 09:50:23.096549 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8brr\" (UniqueName: \"kubernetes.io/projected/bb4421b8-2d05-42aa-823d-95f4d4704c94-kube-api-access-r8brr\") pod \"redhat-marketplace-hkwlg\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") " pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:23 crc kubenswrapper[4758]: I0130 09:50:23.264306 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkwlg" Jan 30 09:50:23 crc kubenswrapper[4758]: I0130 09:50:23.779164 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkwlg"] Jan 30 09:50:24 crc kubenswrapper[4758]: I0130 09:50:24.261170 4758 generic.go:334] "Generic (PLEG): container finished" podID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerID="35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307" exitCode=0 Jan 30 09:50:24 crc kubenswrapper[4758]: I0130 09:50:24.261488 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkwlg" event={"ID":"bb4421b8-2d05-42aa-823d-95f4d4704c94","Type":"ContainerDied","Data":"35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307"} Jan 30 09:50:24 crc kubenswrapper[4758]: I0130 09:50:24.261567 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkwlg" event={"ID":"bb4421b8-2d05-42aa-823d-95f4d4704c94","Type":"ContainerStarted","Data":"d3f3f343b9c85fddae57464a7d2d4977951dbba479525623971ffc2368fc7490"} Jan 30 09:50:24 crc kubenswrapper[4758]: I0130 09:50:24.262834 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.271568 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkwlg" event={"ID":"bb4421b8-2d05-42aa-823d-95f4d4704c94","Type":"ContainerStarted","Data":"2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093"} Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.292616 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tn9df"] Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.294402 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.323405 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tn9df"] Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.396728 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-utilities\") pod \"redhat-operators-tn9df\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.396792 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-catalog-content\") pod \"redhat-operators-tn9df\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.397052 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st2dt\" (UniqueName: \"kubernetes.io/projected/41b18707-50a1-4f70-876c-4116e4970b68-kube-api-access-st2dt\") pod \"redhat-operators-tn9df\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.498473 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-utilities\") pod \"redhat-operators-tn9df\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.498526 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-catalog-content\") pod \"redhat-operators-tn9df\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.498591 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st2dt\" (UniqueName: \"kubernetes.io/projected/41b18707-50a1-4f70-876c-4116e4970b68-kube-api-access-st2dt\") pod \"redhat-operators-tn9df\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.499148 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-catalog-content\") pod \"redhat-operators-tn9df\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.499229 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-utilities\") pod \"redhat-operators-tn9df\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.525084 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-st2dt\" (UniqueName: \"kubernetes.io/projected/41b18707-50a1-4f70-876c-4116e4970b68-kube-api-access-st2dt\") pod \"redhat-operators-tn9df\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:25 crc kubenswrapper[4758]: I0130 09:50:25.614895 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:50:26 crc kubenswrapper[4758]: I0130 09:50:26.177645 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tn9df"] Jan 30 09:50:26 crc kubenswrapper[4758]: I0130 09:50:26.280713 4758 generic.go:334] "Generic (PLEG): container finished" podID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerID="2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093" exitCode=0 Jan 30 09:50:26 crc kubenswrapper[4758]: I0130 09:50:26.280772 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkwlg" event={"ID":"bb4421b8-2d05-42aa-823d-95f4d4704c94","Type":"ContainerDied","Data":"2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093"} Jan 30 09:50:26 crc kubenswrapper[4758]: I0130 09:50:26.283435 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9df" event={"ID":"41b18707-50a1-4f70-876c-4116e4970b68","Type":"ContainerStarted","Data":"c74716ef804e2d84332b2c217c4e78dacce0653d9f0c63fd9aeb6ad91f9e76a9"} Jan 30 09:50:27 crc kubenswrapper[4758]: I0130 09:50:27.292889 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkwlg" event={"ID":"bb4421b8-2d05-42aa-823d-95f4d4704c94","Type":"ContainerStarted","Data":"51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2"} Jan 30 09:50:27 crc kubenswrapper[4758]: I0130 09:50:27.295678 4758 generic.go:334] "Generic (PLEG): container finished" podID="41b18707-50a1-4f70-876c-4116e4970b68" containerID="5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21" exitCode=0 Jan 30 09:50:27 crc kubenswrapper[4758]: I0130 09:50:27.295709 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9df" event={"ID":"41b18707-50a1-4f70-876c-4116e4970b68","Type":"ContainerDied","Data":"5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21"} Jan 30 09:50:27 crc kubenswrapper[4758]: I0130 09:50:27.312413 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hkwlg" podStartSLOduration=2.730337418 podStartE2EDuration="5.312394519s" podCreationTimestamp="2026-01-30 09:50:22 +0000 UTC" firstStartedPulling="2026-01-30 09:50:24.262602647 +0000 UTC m=+4829.234914198" lastFinishedPulling="2026-01-30 09:50:26.844659748 +0000 UTC m=+4831.816971299" observedRunningTime="2026-01-30 09:50:27.310458889 +0000 UTC m=+4832.282770440" watchObservedRunningTime="2026-01-30 09:50:27.312394519 +0000 UTC m=+4832.284706070" Jan 30 09:50:28 crc kubenswrapper[4758]: I0130 09:50:28.322371 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9df" event={"ID":"41b18707-50a1-4f70-876c-4116e4970b68","Type":"ContainerStarted","Data":"9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e"} Jan 30 09:50:33 crc kubenswrapper[4758]: I0130 09:50:33.265126 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hkwlg" 
Jan 30 09:50:33 crc kubenswrapper[4758]: I0130 09:50:33.265714 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hkwlg"
Jan 30 09:50:33 crc kubenswrapper[4758]: I0130 09:50:33.350115 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hkwlg"
Jan 30 09:50:33 crc kubenswrapper[4758]: I0130 09:50:33.558338 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hkwlg"
Jan 30 09:50:33 crc kubenswrapper[4758]: I0130 09:50:33.631832 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkwlg"]
Jan 30 09:50:35 crc kubenswrapper[4758]: I0130 09:50:35.376768 4758 generic.go:334] "Generic (PLEG): container finished" podID="41b18707-50a1-4f70-876c-4116e4970b68" containerID="9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e" exitCode=0
Jan 30 09:50:35 crc kubenswrapper[4758]: I0130 09:50:35.376861 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9df" event={"ID":"41b18707-50a1-4f70-876c-4116e4970b68","Type":"ContainerDied","Data":"9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e"}
Jan 30 09:50:35 crc kubenswrapper[4758]: I0130 09:50:35.377300 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hkwlg" podUID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerName="registry-server" containerID="cri-o://51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2" gracePeriod=2
Jan 30 09:50:35 crc kubenswrapper[4758]: I0130 09:50:35.964424 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkwlg"
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.006694 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8brr\" (UniqueName: \"kubernetes.io/projected/bb4421b8-2d05-42aa-823d-95f4d4704c94-kube-api-access-r8brr\") pod \"bb4421b8-2d05-42aa-823d-95f4d4704c94\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") "
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.007067 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-catalog-content\") pod \"bb4421b8-2d05-42aa-823d-95f4d4704c94\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") "
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.007140 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-utilities\") pod \"bb4421b8-2d05-42aa-823d-95f4d4704c94\" (UID: \"bb4421b8-2d05-42aa-823d-95f4d4704c94\") "
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.012730 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-utilities" (OuterVolumeSpecName: "utilities") pod "bb4421b8-2d05-42aa-823d-95f4d4704c94" (UID: "bb4421b8-2d05-42aa-823d-95f4d4704c94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.029233 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb4421b8-2d05-42aa-823d-95f4d4704c94" (UID: "bb4421b8-2d05-42aa-823d-95f4d4704c94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.035264 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb4421b8-2d05-42aa-823d-95f4d4704c94-kube-api-access-r8brr" (OuterVolumeSpecName: "kube-api-access-r8brr") pod "bb4421b8-2d05-42aa-823d-95f4d4704c94" (UID: "bb4421b8-2d05-42aa-823d-95f4d4704c94"). InnerVolumeSpecName "kube-api-access-r8brr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.108820 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8brr\" (UniqueName: \"kubernetes.io/projected/bb4421b8-2d05-42aa-823d-95f4d4704c94-kube-api-access-r8brr\") on node \"crc\" DevicePath \"\""
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.108851 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.108861 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb4421b8-2d05-42aa-823d-95f4d4704c94-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.388511 4758 generic.go:334] "Generic (PLEG): container finished" podID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerID="51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2" exitCode=0
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.388577 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkwlg"
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.388592 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkwlg" event={"ID":"bb4421b8-2d05-42aa-823d-95f4d4704c94","Type":"ContainerDied","Data":"51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2"}
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.388934 4758 scope.go:117] "RemoveContainer" containerID="51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2"
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.388640 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkwlg" event={"ID":"bb4421b8-2d05-42aa-823d-95f4d4704c94","Type":"ContainerDied","Data":"d3f3f343b9c85fddae57464a7d2d4977951dbba479525623971ffc2368fc7490"}
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.393220 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9df" event={"ID":"41b18707-50a1-4f70-876c-4116e4970b68","Type":"ContainerStarted","Data":"80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574"}
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.414562 4758 scope.go:117] "RemoveContainer" containerID="2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093"
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.420552 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tn9df" podStartSLOduration=2.871121947 podStartE2EDuration="11.420533221s" podCreationTimestamp="2026-01-30 09:50:25 +0000 UTC" firstStartedPulling="2026-01-30 09:50:27.298217753 +0000 UTC m=+4832.270529304" lastFinishedPulling="2026-01-30 09:50:35.847629027 +0000 UTC m=+4840.819940578" observedRunningTime="2026-01-30 09:50:36.418986832 +0000 UTC m=+4841.391298413" watchObservedRunningTime="2026-01-30 09:50:36.420533221 +0000 UTC m=+4841.392844772"
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.450517 4758 scope.go:117] "RemoveContainer" containerID="35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307"
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.456528 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkwlg"]
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.467782 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkwlg"]
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.473376 4758 scope.go:117] "RemoveContainer" containerID="51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2"
Jan 30 09:50:36 crc kubenswrapper[4758]: E0130 09:50:36.473726 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2\": container with ID starting with 51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2 not found: ID does not exist" containerID="51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2"
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.473756 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2"} err="failed to get container status \"51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2\": rpc error: code = NotFound desc = could not find container \"51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2\": container with ID starting with 51a52a7838705e9c3d44bcf689d9f730940405bfc52deee88b4981cd595c02f2 not found: ID does not exist"
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.473776 4758 scope.go:117] "RemoveContainer" containerID="2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093"
Jan 30 09:50:36 crc kubenswrapper[4758]: E0130 09:50:36.474105 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093\": container with ID starting with 2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093 not found: ID does not exist" containerID="2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093"
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.474137 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093"} err="failed to get container status \"2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093\": rpc error: code = NotFound desc = could not find container \"2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093\": container with ID starting with 2355f227576ae3357e51400d87c4d1fde2982d7363bc91e7a53a7584cfe13093 not found: ID does not exist"
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.474183 4758 scope.go:117] "RemoveContainer" containerID="35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307"
Jan 30 09:50:36 crc kubenswrapper[4758]: E0130 09:50:36.474487 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307\": container with ID starting with 35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307 not found: ID does not exist" containerID="35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307"
Jan 30 09:50:36 crc kubenswrapper[4758]: I0130 09:50:36.474516 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307"} err="failed to get container status \"35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307\": rpc error: code = NotFound desc = could not find container \"35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307\": container with ID starting with 35d1e9467ea1ae003aec00f45268f78e173301a0d33980c92c93e2b4a394f307 not found: ID does not exist"
Jan 30 09:50:37 crc kubenswrapper[4758]: I0130 09:50:37.780435 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb4421b8-2d05-42aa-823d-95f4d4704c94" path="/var/lib/kubelet/pods/bb4421b8-2d05-42aa-823d-95f4d4704c94/volumes"
Jan 30 09:50:45 crc kubenswrapper[4758]: I0130 09:50:45.616660 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tn9df"
Jan 30 09:50:45 crc kubenswrapper[4758]: I0130 09:50:45.617275 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tn9df"
Jan 30 09:50:46 crc kubenswrapper[4758]: I0130 09:50:46.658903 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tn9df" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="registry-server" probeResult="failure" output=<
Jan 30 09:50:46 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s
Jan 30 09:50:46 crc kubenswrapper[4758]: >
Jan 30 09:50:52 crc kubenswrapper[4758]: I0130 09:50:52.387818 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 09:50:52 crc kubenswrapper[4758]: I0130 09:50:52.388407 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 09:50:56 crc kubenswrapper[4758]: I0130 09:50:56.658012 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tn9df" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="registry-server" probeResult="failure" output=<
Jan 30 09:50:56 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s
Jan 30 09:50:56 crc kubenswrapper[4758]: >
Jan 30 09:51:06 crc kubenswrapper[4758]: I0130 09:51:06.663615 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tn9df" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="registry-server" probeResult="failure" output=<
Jan 30 09:51:06 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s
Jan 30 09:51:06 crc kubenswrapper[4758]: >
Jan 30 09:51:15 crc kubenswrapper[4758]: I0130 09:51:15.662761 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tn9df"
Jan 30 09:51:15 crc kubenswrapper[4758]: I0130 09:51:15.712857 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tn9df"
Jan 30 09:51:15 crc kubenswrapper[4758]: I0130 09:51:15.902768 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tn9df"]
Jan 30 09:51:16 crc kubenswrapper[4758]: I0130 09:51:16.858997 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tn9df" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="registry-server" containerID="cri-o://80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574" gracePeriod=2
Need to start a new one" pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.618761 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-utilities\") pod \"41b18707-50a1-4f70-876c-4116e4970b68\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.618903 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st2dt\" (UniqueName: \"kubernetes.io/projected/41b18707-50a1-4f70-876c-4116e4970b68-kube-api-access-st2dt\") pod \"41b18707-50a1-4f70-876c-4116e4970b68\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.618984 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-catalog-content\") pod \"41b18707-50a1-4f70-876c-4116e4970b68\" (UID: \"41b18707-50a1-4f70-876c-4116e4970b68\") " Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.619971 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-utilities" (OuterVolumeSpecName: "utilities") pod "41b18707-50a1-4f70-876c-4116e4970b68" (UID: "41b18707-50a1-4f70-876c-4116e4970b68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.624860 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41b18707-50a1-4f70-876c-4116e4970b68-kube-api-access-st2dt" (OuterVolumeSpecName: "kube-api-access-st2dt") pod "41b18707-50a1-4f70-876c-4116e4970b68" (UID: "41b18707-50a1-4f70-876c-4116e4970b68"). InnerVolumeSpecName "kube-api-access-st2dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.720763 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.720794 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st2dt\" (UniqueName: \"kubernetes.io/projected/41b18707-50a1-4f70-876c-4116e4970b68-kube-api-access-st2dt\") on node \"crc\" DevicePath \"\"" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.750591 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41b18707-50a1-4f70-876c-4116e4970b68" (UID: "41b18707-50a1-4f70-876c-4116e4970b68"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.823313 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41b18707-50a1-4f70-876c-4116e4970b68-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.870663 4758 generic.go:334] "Generic (PLEG): container finished" podID="41b18707-50a1-4f70-876c-4116e4970b68" containerID="80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574" exitCode=0 Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.870701 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9df" event={"ID":"41b18707-50a1-4f70-876c-4116e4970b68","Type":"ContainerDied","Data":"80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574"} Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.870727 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tn9df" event={"ID":"41b18707-50a1-4f70-876c-4116e4970b68","Type":"ContainerDied","Data":"c74716ef804e2d84332b2c217c4e78dacce0653d9f0c63fd9aeb6ad91f9e76a9"} Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.870744 4758 scope.go:117] "RemoveContainer" containerID="80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.870870 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tn9df" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.898479 4758 scope.go:117] "RemoveContainer" containerID="9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.900385 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tn9df"] Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.919352 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tn9df"] Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.925545 4758 scope.go:117] "RemoveContainer" containerID="5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.963535 4758 scope.go:117] "RemoveContainer" containerID="80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574" Jan 30 09:51:17 crc kubenswrapper[4758]: E0130 09:51:17.964210 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574\": container with ID starting with 80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574 not found: ID does not exist" containerID="80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.964479 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574"} err="failed to get container status \"80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574\": rpc error: code = NotFound desc = could not find container \"80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574\": container with ID starting with 80b7f0d2171e74d3f7f5928e656e38d7ef1717ceb90e7ea87e677b3702745574 not found: ID does not exist" Jan 30 09:51:17 crc 
kubenswrapper[4758]: I0130 09:51:17.964555 4758 scope.go:117] "RemoveContainer" containerID="9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e" Jan 30 09:51:17 crc kubenswrapper[4758]: E0130 09:51:17.964856 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e\": container with ID starting with 9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e not found: ID does not exist" containerID="9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.964944 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e"} err="failed to get container status \"9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e\": rpc error: code = NotFound desc = could not find container \"9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e\": container with ID starting with 9a281bbddd3d0c5082a728cf07d756c67da977adcb596fcf67889df0164dda1e not found: ID does not exist" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.965032 4758 scope.go:117] "RemoveContainer" containerID="5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21" Jan 30 09:51:17 crc kubenswrapper[4758]: E0130 09:51:17.965349 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21\": container with ID starting with 5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21 not found: ID does not exist" containerID="5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21" Jan 30 09:51:17 crc kubenswrapper[4758]: I0130 09:51:17.965481 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21"} err="failed to get container status \"5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21\": rpc error: code = NotFound desc = could not find container \"5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21\": container with ID starting with 5fc13489b77c191a1e9e1e0d85d352cf32face2372e9b306fb089e26276d7e21 not found: ID does not exist" Jan 30 09:51:19 crc kubenswrapper[4758]: I0130 09:51:19.778977 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41b18707-50a1-4f70-876c-4116e4970b68" path="/var/lib/kubelet/pods/41b18707-50a1-4f70-876c-4116e4970b68/volumes" Jan 30 09:51:22 crc kubenswrapper[4758]: I0130 09:51:22.387473 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:51:22 crc kubenswrapper[4758]: I0130 09:51:22.387823 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 09:51:52 crc kubenswrapper[4758]: I0130 09:51:52.386941 4758 patch_prober.go:28] interesting 
Jan 30 09:51:52 crc kubenswrapper[4758]: I0130 09:51:52.386941 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 09:51:52 crc kubenswrapper[4758]: I0130 09:51:52.387649 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 09:51:52 crc kubenswrapper[4758]: I0130 09:51:52.387709 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx"
Jan 30 09:51:52 crc kubenswrapper[4758]: I0130 09:51:52.388749 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 09:51:52 crc kubenswrapper[4758]: I0130 09:51:52.388824 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" gracePeriod=600
Jan 30 09:51:52 crc kubenswrapper[4758]: E0130 09:51:52.523568 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 09:51:53 crc kubenswrapper[4758]: I0130 09:51:53.193209 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" exitCode=0
Jan 30 09:51:53 crc kubenswrapper[4758]: I0130 09:51:53.193539 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3"}
Jan 30 09:51:53 crc kubenswrapper[4758]: I0130 09:51:53.193650 4758 scope.go:117] "RemoveContainer" containerID="b8ffbf2a6303e1bf0dae7b91841734487c9b9b731418440f9ba74df848313204"
Jan 30 09:51:53 crc kubenswrapper[4758]: I0130 09:51:53.194468 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3"
Jan 30 09:51:53 crc kubenswrapper[4758]: E0130 09:51:53.194823 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
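
The "back-off 5m0s" in the errors above is the cap of the kubelet's container-restart backoff: the delay doubles after each crash until it hits the maximum, and every sync attempt inside that window is rejected with the same CrashLoopBackOff error. A sketch of that schedule, assuming the conventional 10s initial delay and the 5m cap implied by the message (both constants are an assumption about this build, not something the log states):

```go
package main

import (
	"fmt"
	"time"
)

// restartDelay returns the crash-loop backoff before restart attempt n
// (0-based): an initial delay doubled on each failure, capped at max.
func restartDelay(n int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 0; i < n; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	// Prints 10s 20s 40s 1m20s 2m40s 5m0s 5m0s: once at the cap, every
	// periodic sync emits the "back-off 5m0s restarting failed
	// container" rejection seen throughout this log.
	for n := 0; n < 7; n++ {
		fmt.Printf("attempt %d: wait %v\n", n, restartDelay(n, 10*time.Second, 5*time.Minute))
	}
}
```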
pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:52:05 crc kubenswrapper[4758]: I0130 09:52:05.781695 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:52:05 crc kubenswrapper[4758]: E0130 09:52:05.782588 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.242475 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vz4jt"] Jan 30 09:52:06 crc kubenswrapper[4758]: E0130 09:52:06.243270 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerName="extract-content" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.243293 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerName="extract-content" Jan 30 09:52:06 crc kubenswrapper[4758]: E0130 09:52:06.243312 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="extract-content" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.243320 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="extract-content" Jan 30 09:52:06 crc kubenswrapper[4758]: E0130 09:52:06.243338 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerName="extract-utilities" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.243347 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerName="extract-utilities" Jan 30 09:52:06 crc kubenswrapper[4758]: E0130 09:52:06.243367 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerName="registry-server" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.243375 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerName="registry-server" Jan 30 09:52:06 crc kubenswrapper[4758]: E0130 09:52:06.243393 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="extract-utilities" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.243402 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="extract-utilities" Jan 30 09:52:06 crc kubenswrapper[4758]: E0130 09:52:06.243425 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="registry-server" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.243432 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="registry-server" Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.243684 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb4421b8-2d05-42aa-823d-95f4d4704c94" containerName="registry-server" Jan 
Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.243701 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="41b18707-50a1-4f70-876c-4116e4970b68" containerName="registry-server"
Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.245374 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vz4jt"
Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.254553 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vz4jt"]
Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.333085 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-catalog-content\") pod \"certified-operators-vz4jt\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " pod="openshift-marketplace/certified-operators-vz4jt"
Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.333204 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gzsq\" (UniqueName: \"kubernetes.io/projected/23b0349e-90b6-461d-9d07-2e0e5264cfc3-kube-api-access-8gzsq\") pod \"certified-operators-vz4jt\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " pod="openshift-marketplace/certified-operators-vz4jt"
Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.333326 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-utilities\") pod \"certified-operators-vz4jt\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " pod="openshift-marketplace/certified-operators-vz4jt"
Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.435288 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-utilities\") pod \"certified-operators-vz4jt\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " pod="openshift-marketplace/certified-operators-vz4jt"
Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.435343 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-catalog-content\") pod \"certified-operators-vz4jt\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " pod="openshift-marketplace/certified-operators-vz4jt"
Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.435389 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gzsq\" (UniqueName: \"kubernetes.io/projected/23b0349e-90b6-461d-9d07-2e0e5264cfc3-kube-api-access-8gzsq\") pod \"certified-operators-vz4jt\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " pod="openshift-marketplace/certified-operators-vz4jt"
Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.436062 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-utilities\") pod \"certified-operators-vz4jt\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " pod="openshift-marketplace/certified-operators-vz4jt"
Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.436187 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-catalog-content\") pod \"certified-operators-vz4jt\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " pod="openshift-marketplace/certified-operators-vz4jt"
Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.453781 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gzsq\" (UniqueName: \"kubernetes.io/projected/23b0349e-90b6-461d-9d07-2e0e5264cfc3-kube-api-access-8gzsq\") pod \"certified-operators-vz4jt\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " pod="openshift-marketplace/certified-operators-vz4jt"
Jan 30 09:52:06 crc kubenswrapper[4758]: I0130 09:52:06.576193 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vz4jt"
Jan 30 09:52:07 crc kubenswrapper[4758]: I0130 09:52:07.215435 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vz4jt"]
Jan 30 09:52:07 crc kubenswrapper[4758]: I0130 09:52:07.318421 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vz4jt" event={"ID":"23b0349e-90b6-461d-9d07-2e0e5264cfc3","Type":"ContainerStarted","Data":"7ccd6f77a4ed3616550ba4614117ffe2ac0adf80c68b2c2a1e2779b5b9f9ec76"}
Jan 30 09:52:08 crc kubenswrapper[4758]: I0130 09:52:08.329846 4758 generic.go:334] "Generic (PLEG): container finished" podID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerID="9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323" exitCode=0
Jan 30 09:52:08 crc kubenswrapper[4758]: I0130 09:52:08.330189 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vz4jt" event={"ID":"23b0349e-90b6-461d-9d07-2e0e5264cfc3","Type":"ContainerDied","Data":"9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323"}
Jan 30 09:52:10 crc kubenswrapper[4758]: I0130 09:52:10.347759 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vz4jt" event={"ID":"23b0349e-90b6-461d-9d07-2e0e5264cfc3","Type":"ContainerStarted","Data":"3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d"}
Jan 30 09:52:11 crc kubenswrapper[4758]: I0130 09:52:11.359220 4758 generic.go:334] "Generic (PLEG): container finished" podID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerID="3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d" exitCode=0
Jan 30 09:52:11 crc kubenswrapper[4758]: I0130 09:52:11.359292 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vz4jt" event={"ID":"23b0349e-90b6-461d-9d07-2e0e5264cfc3","Type":"ContainerDied","Data":"3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d"}
Jan 30 09:52:12 crc kubenswrapper[4758]: I0130 09:52:12.371074 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vz4jt" event={"ID":"23b0349e-90b6-461d-9d07-2e0e5264cfc3","Type":"ContainerStarted","Data":"431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9"}
Jan 30 09:52:12 crc kubenswrapper[4758]: I0130 09:52:12.394619 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vz4jt" podStartSLOduration=2.884627468 podStartE2EDuration="6.394596664s" podCreationTimestamp="2026-01-30 09:52:06 +0000 UTC" firstStartedPulling="2026-01-30 09:52:08.33224567 +0000 UTC m=+4933.304557221" lastFinishedPulling="2026-01-30 09:52:11.842214876 +0000 UTC m=+4936.814526417" observedRunningTime="2026-01-30 09:52:12.393355306 +0000 UTC m=+4937.365666857" watchObservedRunningTime="2026-01-30 09:52:12.394596664 +0000 UTC m=+4937.366908215"
+0000 UTC m=+4936.814526417" observedRunningTime="2026-01-30 09:52:12.393355306 +0000 UTC m=+4937.365666857" watchObservedRunningTime="2026-01-30 09:52:12.394596664 +0000 UTC m=+4937.366908215" Jan 30 09:52:16 crc kubenswrapper[4758]: I0130 09:52:16.576366 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:16 crc kubenswrapper[4758]: I0130 09:52:16.576900 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:16 crc kubenswrapper[4758]: I0130 09:52:16.623237 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:17 crc kubenswrapper[4758]: I0130 09:52:17.836744 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:17 crc kubenswrapper[4758]: I0130 09:52:17.888183 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vz4jt"] Jan 30 09:52:19 crc kubenswrapper[4758]: I0130 09:52:19.425223 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vz4jt" podUID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerName="registry-server" containerID="cri-o://431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9" gracePeriod=2 Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.008191 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.123004 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-utilities\") pod \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.123237 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-catalog-content\") pod \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.123440 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gzsq\" (UniqueName: \"kubernetes.io/projected/23b0349e-90b6-461d-9d07-2e0e5264cfc3-kube-api-access-8gzsq\") pod \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\" (UID: \"23b0349e-90b6-461d-9d07-2e0e5264cfc3\") " Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.124789 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-utilities" (OuterVolumeSpecName: "utilities") pod "23b0349e-90b6-461d-9d07-2e0e5264cfc3" (UID: "23b0349e-90b6-461d-9d07-2e0e5264cfc3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.129997 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b0349e-90b6-461d-9d07-2e0e5264cfc3-kube-api-access-8gzsq" (OuterVolumeSpecName: "kube-api-access-8gzsq") pod "23b0349e-90b6-461d-9d07-2e0e5264cfc3" (UID: "23b0349e-90b6-461d-9d07-2e0e5264cfc3"). 
InnerVolumeSpecName "kube-api-access-8gzsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.172683 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23b0349e-90b6-461d-9d07-2e0e5264cfc3" (UID: "23b0349e-90b6-461d-9d07-2e0e5264cfc3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.226091 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gzsq\" (UniqueName: \"kubernetes.io/projected/23b0349e-90b6-461d-9d07-2e0e5264cfc3-kube-api-access-8gzsq\") on node \"crc\" DevicePath \"\"" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.226123 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.226136 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b0349e-90b6-461d-9d07-2e0e5264cfc3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.436696 4758 generic.go:334] "Generic (PLEG): container finished" podID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerID="431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9" exitCode=0 Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.436737 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vz4jt" event={"ID":"23b0349e-90b6-461d-9d07-2e0e5264cfc3","Type":"ContainerDied","Data":"431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9"} Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.436764 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vz4jt" event={"ID":"23b0349e-90b6-461d-9d07-2e0e5264cfc3","Type":"ContainerDied","Data":"7ccd6f77a4ed3616550ba4614117ffe2ac0adf80c68b2c2a1e2779b5b9f9ec76"} Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.436783 4758 scope.go:117] "RemoveContainer" containerID="431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.437168 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vz4jt" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.468068 4758 scope.go:117] "RemoveContainer" containerID="3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.480424 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vz4jt"] Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.489371 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vz4jt"] Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.521240 4758 scope.go:117] "RemoveContainer" containerID="9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.555831 4758 scope.go:117] "RemoveContainer" containerID="431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9" Jan 30 09:52:20 crc kubenswrapper[4758]: E0130 09:52:20.556434 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9\": container with ID starting with 431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9 not found: ID does not exist" containerID="431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.556487 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9"} err="failed to get container status \"431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9\": rpc error: code = NotFound desc = could not find container \"431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9\": container with ID starting with 431fb79878514db06f75b04df4c6c384c7f9ea4727110f288448d9f3321848b9 not found: ID does not exist" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.556524 4758 scope.go:117] "RemoveContainer" containerID="3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d" Jan 30 09:52:20 crc kubenswrapper[4758]: E0130 09:52:20.556998 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d\": container with ID starting with 3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d not found: ID does not exist" containerID="3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.557058 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d"} err="failed to get container status \"3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d\": rpc error: code = NotFound desc = could not find container \"3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d\": container with ID starting with 3355179f5c01dd972c749c24283de96a43484083807e865daba9f8abd8057a8d not found: ID does not exist" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.557086 4758 scope.go:117] "RemoveContainer" containerID="9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323" Jan 30 09:52:20 crc kubenswrapper[4758]: E0130 09:52:20.558606 4758 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323\": container with ID starting with 9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323 not found: ID does not exist" containerID="9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.558651 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323"} err="failed to get container status \"9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323\": rpc error: code = NotFound desc = could not find container \"9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323\": container with ID starting with 9938cabdfbfbc81a849f7e12c189a7471847c422d3b3dc12c1d277f2b7084323 not found: ID does not exist" Jan 30 09:52:20 crc kubenswrapper[4758]: I0130 09:52:20.769624 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:52:20 crc kubenswrapper[4758]: E0130 09:52:20.770317 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:52:21 crc kubenswrapper[4758]: I0130 09:52:21.779723 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" path="/var/lib/kubelet/pods/23b0349e-90b6-461d-9d07-2e0e5264cfc3/volumes" Jan 30 09:52:33 crc kubenswrapper[4758]: I0130 09:52:33.769439 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:52:33 crc kubenswrapper[4758]: E0130 09:52:33.770748 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:52:44 crc kubenswrapper[4758]: I0130 09:52:44.768439 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:52:44 crc kubenswrapper[4758]: E0130 09:52:44.769197 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:52:59 crc kubenswrapper[4758]: I0130 09:52:59.768713 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:52:59 crc kubenswrapper[4758]: E0130 09:52:59.769701 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:53:12 crc kubenswrapper[4758]: I0130 09:53:12.768957 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:53:12 crc kubenswrapper[4758]: E0130 09:53:12.771534 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:53:24 crc kubenswrapper[4758]: I0130 09:53:24.769159 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:53:24 crc kubenswrapper[4758]: E0130 09:53:24.769940 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:53:39 crc kubenswrapper[4758]: I0130 09:53:39.768955 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:53:39 crc kubenswrapper[4758]: E0130 09:53:39.770148 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:53:50 crc kubenswrapper[4758]: I0130 09:53:50.768264 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:53:50 crc kubenswrapper[4758]: E0130 09:53:50.769817 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:54:05 crc kubenswrapper[4758]: I0130 09:54:05.775510 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:54:05 crc kubenswrapper[4758]: E0130 09:54:05.776314 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:54:17 crc kubenswrapper[4758]: I0130 09:54:17.770147 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:54:17 crc kubenswrapper[4758]: E0130 09:54:17.771115 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:54:28 crc kubenswrapper[4758]: I0130 09:54:28.769013 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:54:28 crc kubenswrapper[4758]: E0130 09:54:28.769638 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:54:39 crc kubenswrapper[4758]: I0130 09:54:39.769054 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:54:39 crc kubenswrapper[4758]: E0130 09:54:39.769805 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:54:53 crc kubenswrapper[4758]: I0130 09:54:53.769241 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:54:53 crc kubenswrapper[4758]: E0130 09:54:53.770186 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:55:04 crc kubenswrapper[4758]: I0130 09:55:04.768449 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:55:04 crc kubenswrapper[4758]: E0130 09:55:04.769282 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" 
podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:55:15 crc kubenswrapper[4758]: I0130 09:55:15.776616 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:55:15 crc kubenswrapper[4758]: E0130 09:55:15.777355 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:55:30 crc kubenswrapper[4758]: I0130 09:55:30.769054 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:55:30 crc kubenswrapper[4758]: E0130 09:55:30.769732 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:55:43 crc kubenswrapper[4758]: I0130 09:55:43.769934 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:55:43 crc kubenswrapper[4758]: E0130 09:55:43.770658 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:55:54 crc kubenswrapper[4758]: I0130 09:55:54.769361 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:55:54 crc kubenswrapper[4758]: E0130 09:55:54.770481 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:56:06 crc kubenswrapper[4758]: I0130 09:56:06.770155 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:56:06 crc kubenswrapper[4758]: E0130 09:56:06.771121 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.571656 4758 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-nsqjp"] Jan 30 09:56:11 crc kubenswrapper[4758]: E0130 09:56:11.572275 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerName="extract-utilities" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.572288 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerName="extract-utilities" Jan 30 09:56:11 crc kubenswrapper[4758]: E0130 09:56:11.572298 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerName="registry-server" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.572304 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerName="registry-server" Jan 30 09:56:11 crc kubenswrapper[4758]: E0130 09:56:11.572319 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerName="extract-content" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.572338 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerName="extract-content" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.572527 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b0349e-90b6-461d-9d07-2e0e5264cfc3" containerName="registry-server" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.576798 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.586700 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nsqjp"] Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.643397 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-catalog-content\") pod \"community-operators-nsqjp\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.643495 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtqx4\" (UniqueName: \"kubernetes.io/projected/eae43c60-4f82-4c9f-86cb-c453f5633105-kube-api-access-gtqx4\") pod \"community-operators-nsqjp\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.643527 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-utilities\") pod \"community-operators-nsqjp\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.745094 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtqx4\" (UniqueName: \"kubernetes.io/projected/eae43c60-4f82-4c9f-86cb-c453f5633105-kube-api-access-gtqx4\") pod \"community-operators-nsqjp\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.745154 4758 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-utilities\") pod \"community-operators-nsqjp\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.745255 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-catalog-content\") pod \"community-operators-nsqjp\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.745695 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-utilities\") pod \"community-operators-nsqjp\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.745784 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-catalog-content\") pod \"community-operators-nsqjp\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.766233 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtqx4\" (UniqueName: \"kubernetes.io/projected/eae43c60-4f82-4c9f-86cb-c453f5633105-kube-api-access-gtqx4\") pod \"community-operators-nsqjp\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:11 crc kubenswrapper[4758]: I0130 09:56:11.913208 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:12 crc kubenswrapper[4758]: I0130 09:56:12.568850 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nsqjp"] Jan 30 09:56:13 crc kubenswrapper[4758]: I0130 09:56:13.475031 4758 generic.go:334] "Generic (PLEG): container finished" podID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerID="96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486" exitCode=0 Jan 30 09:56:13 crc kubenswrapper[4758]: I0130 09:56:13.475140 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsqjp" event={"ID":"eae43c60-4f82-4c9f-86cb-c453f5633105","Type":"ContainerDied","Data":"96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486"} Jan 30 09:56:13 crc kubenswrapper[4758]: I0130 09:56:13.475348 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsqjp" event={"ID":"eae43c60-4f82-4c9f-86cb-c453f5633105","Type":"ContainerStarted","Data":"31dbd8a0d2181073c437e0a25b9faf07401b3676b670f95d7a1d2cc581545505"} Jan 30 09:56:13 crc kubenswrapper[4758]: I0130 09:56:13.477902 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 09:56:14 crc kubenswrapper[4758]: I0130 09:56:14.484303 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsqjp" event={"ID":"eae43c60-4f82-4c9f-86cb-c453f5633105","Type":"ContainerStarted","Data":"9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade"} Jan 30 09:56:16 crc kubenswrapper[4758]: I0130 09:56:16.506386 4758 generic.go:334] "Generic (PLEG): container finished" podID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerID="9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade" exitCode=0 Jan 30 09:56:16 crc kubenswrapper[4758]: I0130 09:56:16.506529 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsqjp" event={"ID":"eae43c60-4f82-4c9f-86cb-c453f5633105","Type":"ContainerDied","Data":"9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade"} Jan 30 09:56:17 crc kubenswrapper[4758]: I0130 09:56:17.517528 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsqjp" event={"ID":"eae43c60-4f82-4c9f-86cb-c453f5633105","Type":"ContainerStarted","Data":"50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8"} Jan 30 09:56:17 crc kubenswrapper[4758]: I0130 09:56:17.539644 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nsqjp" podStartSLOduration=3.140921249 podStartE2EDuration="6.539621775s" podCreationTimestamp="2026-01-30 09:56:11 +0000 UTC" firstStartedPulling="2026-01-30 09:56:13.477263054 +0000 UTC m=+5178.449574595" lastFinishedPulling="2026-01-30 09:56:16.87596358 +0000 UTC m=+5181.848275121" observedRunningTime="2026-01-30 09:56:17.53248699 +0000 UTC m=+5182.504798571" watchObservedRunningTime="2026-01-30 09:56:17.539621775 +0000 UTC m=+5182.511933346" Jan 30 09:56:17 crc kubenswrapper[4758]: I0130 09:56:17.769130 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:56:17 crc kubenswrapper[4758]: E0130 09:56:17.769463 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:56:21 crc kubenswrapper[4758]: I0130 09:56:21.914225 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:21 crc kubenswrapper[4758]: I0130 09:56:21.914828 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:21 crc kubenswrapper[4758]: I0130 09:56:21.976515 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:22 crc kubenswrapper[4758]: I0130 09:56:22.610500 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:22 crc kubenswrapper[4758]: I0130 09:56:22.715624 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nsqjp"] Jan 30 09:56:24 crc kubenswrapper[4758]: I0130 09:56:24.582985 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nsqjp" podUID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerName="registry-server" containerID="cri-o://50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8" gracePeriod=2 Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.155654 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.191208 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtqx4\" (UniqueName: \"kubernetes.io/projected/eae43c60-4f82-4c9f-86cb-c453f5633105-kube-api-access-gtqx4\") pod \"eae43c60-4f82-4c9f-86cb-c453f5633105\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.191279 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-utilities\") pod \"eae43c60-4f82-4c9f-86cb-c453f5633105\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.191357 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-catalog-content\") pod \"eae43c60-4f82-4c9f-86cb-c453f5633105\" (UID: \"eae43c60-4f82-4c9f-86cb-c453f5633105\") " Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.199014 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-utilities" (OuterVolumeSpecName: "utilities") pod "eae43c60-4f82-4c9f-86cb-c453f5633105" (UID: "eae43c60-4f82-4c9f-86cb-c453f5633105"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.217726 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eae43c60-4f82-4c9f-86cb-c453f5633105-kube-api-access-gtqx4" (OuterVolumeSpecName: "kube-api-access-gtqx4") pod "eae43c60-4f82-4c9f-86cb-c453f5633105" (UID: "eae43c60-4f82-4c9f-86cb-c453f5633105"). InnerVolumeSpecName "kube-api-access-gtqx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.246945 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eae43c60-4f82-4c9f-86cb-c453f5633105" (UID: "eae43c60-4f82-4c9f-86cb-c453f5633105"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.293578 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtqx4\" (UniqueName: \"kubernetes.io/projected/eae43c60-4f82-4c9f-86cb-c453f5633105-kube-api-access-gtqx4\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.293630 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.293646 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae43c60-4f82-4c9f-86cb-c453f5633105-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.598264 4758 generic.go:334] "Generic (PLEG): container finished" podID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerID="50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8" exitCode=0 Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.598314 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsqjp" event={"ID":"eae43c60-4f82-4c9f-86cb-c453f5633105","Type":"ContainerDied","Data":"50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8"} Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.598346 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsqjp" event={"ID":"eae43c60-4f82-4c9f-86cb-c453f5633105","Type":"ContainerDied","Data":"31dbd8a0d2181073c437e0a25b9faf07401b3676b670f95d7a1d2cc581545505"} Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.598360 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nsqjp" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.598366 4758 scope.go:117] "RemoveContainer" containerID="50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.641077 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nsqjp"] Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.656798 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nsqjp"] Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.659238 4758 scope.go:117] "RemoveContainer" containerID="9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.692860 4758 scope.go:117] "RemoveContainer" containerID="96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.745397 4758 scope.go:117] "RemoveContainer" containerID="50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8" Jan 30 09:56:25 crc kubenswrapper[4758]: E0130 09:56:25.745896 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8\": container with ID starting with 50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8 not found: ID does not exist" containerID="50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.745940 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8"} err="failed to get container status \"50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8\": rpc error: code = NotFound desc = could not find container \"50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8\": container with ID starting with 50fa6b85bff8a2a86b68290e5466af2771ca96853c0210b62963eb2a44d79fa8 not found: ID does not exist" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.745970 4758 scope.go:117] "RemoveContainer" containerID="9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade" Jan 30 09:56:25 crc kubenswrapper[4758]: E0130 09:56:25.746645 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade\": container with ID starting with 9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade not found: ID does not exist" containerID="9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.746674 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade"} err="failed to get container status \"9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade\": rpc error: code = NotFound desc = could not find container \"9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade\": container with ID starting with 9d81cd38e37ada49eb6dd3020b3535e61f77c545a1cf35cff648056313555ade not found: ID does not exist" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.746694 4758 scope.go:117] "RemoveContainer" 
containerID="96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486" Jan 30 09:56:25 crc kubenswrapper[4758]: E0130 09:56:25.746950 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486\": container with ID starting with 96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486 not found: ID does not exist" containerID="96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.746978 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486"} err="failed to get container status \"96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486\": rpc error: code = NotFound desc = could not find container \"96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486\": container with ID starting with 96bf4ebf5c0a80e355322bc42513fea5c605fe60239cd6b17ce4774b48709486 not found: ID does not exist" Jan 30 09:56:25 crc kubenswrapper[4758]: I0130 09:56:25.778327 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eae43c60-4f82-4c9f-86cb-c453f5633105" path="/var/lib/kubelet/pods/eae43c60-4f82-4c9f-86cb-c453f5633105/volumes" Jan 30 09:56:30 crc kubenswrapper[4758]: I0130 09:56:30.770099 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:56:30 crc kubenswrapper[4758]: E0130 09:56:30.770715 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:56:44 crc kubenswrapper[4758]: I0130 09:56:44.769079 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:56:44 crc kubenswrapper[4758]: E0130 09:56:44.769808 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 09:56:45 crc kubenswrapper[4758]: I0130 09:56:45.767827 4758 generic.go:334] "Generic (PLEG): container finished" podID="110e1168-332c-4165-bd6e-47419c571681" containerID="9585d72b864ad0afa161d8615e2e581e1849cfc14f137d53455eed18ce1d77db" exitCode=1 Jan 30 09:56:45 crc kubenswrapper[4758]: I0130 09:56:45.777851 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"110e1168-332c-4165-bd6e-47419c571681","Type":"ContainerDied","Data":"9585d72b864ad0afa161d8615e2e581e1849cfc14f137d53455eed18ce1d77db"} Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.188521 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.261395 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"110e1168-332c-4165-bd6e-47419c571681\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.261520 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-openstack-config-secret\") pod \"110e1168-332c-4165-bd6e-47419c571681\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.261614 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-temporary\") pod \"110e1168-332c-4165-bd6e-47419c571681\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.261671 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ca-certs\") pod \"110e1168-332c-4165-bd6e-47419c571681\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.261713 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ssh-key\") pod \"110e1168-332c-4165-bd6e-47419c571681\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.261761 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-config-data\") pod \"110e1168-332c-4165-bd6e-47419c571681\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.261840 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-openstack-config\") pod \"110e1168-332c-4165-bd6e-47419c571681\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.261960 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdxsh\" (UniqueName: \"kubernetes.io/projected/110e1168-332c-4165-bd6e-47419c571681-kube-api-access-kdxsh\") pod \"110e1168-332c-4165-bd6e-47419c571681\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.262074 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-workdir\") pod \"110e1168-332c-4165-bd6e-47419c571681\" (UID: \"110e1168-332c-4165-bd6e-47419c571681\") " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.262289 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "110e1168-332c-4165-bd6e-47419c571681" (UID: "110e1168-332c-4165-bd6e-47419c571681"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.262757 4758 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.268211 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-config-data" (OuterVolumeSpecName: "config-data") pod "110e1168-332c-4165-bd6e-47419c571681" (UID: "110e1168-332c-4165-bd6e-47419c571681"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.268370 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "110e1168-332c-4165-bd6e-47419c571681" (UID: "110e1168-332c-4165-bd6e-47419c571681"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.271316 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "110e1168-332c-4165-bd6e-47419c571681" (UID: "110e1168-332c-4165-bd6e-47419c571681"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.271325 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/110e1168-332c-4165-bd6e-47419c571681-kube-api-access-kdxsh" (OuterVolumeSpecName: "kube-api-access-kdxsh") pod "110e1168-332c-4165-bd6e-47419c571681" (UID: "110e1168-332c-4165-bd6e-47419c571681"). InnerVolumeSpecName "kube-api-access-kdxsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.297168 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "110e1168-332c-4165-bd6e-47419c571681" (UID: "110e1168-332c-4165-bd6e-47419c571681"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.303485 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "110e1168-332c-4165-bd6e-47419c571681" (UID: "110e1168-332c-4165-bd6e-47419c571681"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.303550 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "110e1168-332c-4165-bd6e-47419c571681" (UID: "110e1168-332c-4165-bd6e-47419c571681"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.320865 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "110e1168-332c-4165-bd6e-47419c571681" (UID: "110e1168-332c-4165-bd6e-47419c571681"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.365478 4758 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.365813 4758 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.365925 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.366016 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/110e1168-332c-4165-bd6e-47419c571681-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.366114 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdxsh\" (UniqueName: \"kubernetes.io/projected/110e1168-332c-4165-bd6e-47419c571681-kube-api-access-kdxsh\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.366184 4758 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/110e1168-332c-4165-bd6e-47419c571681-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.367982 4758 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.368090 4758 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/110e1168-332c-4165-bd6e-47419c571681-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.389611 4758 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.473332 4758 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 
30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.785474 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"110e1168-332c-4165-bd6e-47419c571681","Type":"ContainerDied","Data":"333f254addae44025e00aa879c29d6e6ca58b2430bd9e1d7e1eba46b32166b13"} Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.785522 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="333f254addae44025e00aa879c29d6e6ca58b2430bd9e1d7e1eba46b32166b13" Jan 30 09:56:47 crc kubenswrapper[4758]: I0130 09:56:47.785526 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.768824 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.803173 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 09:56:58 crc kubenswrapper[4758]: E0130 09:56:58.804835 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="110e1168-332c-4165-bd6e-47419c571681" containerName="tempest-tests-tempest-tests-runner" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.804856 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="110e1168-332c-4165-bd6e-47419c571681" containerName="tempest-tests-tempest-tests-runner" Jan 30 09:56:58 crc kubenswrapper[4758]: E0130 09:56:58.804881 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerName="registry-server" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.804888 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerName="registry-server" Jan 30 09:56:58 crc kubenswrapper[4758]: E0130 09:56:58.804914 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerName="extract-content" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.804921 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerName="extract-content" Jan 30 09:56:58 crc kubenswrapper[4758]: E0130 09:56:58.805184 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerName="extract-utilities" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.805199 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerName="extract-utilities" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.805378 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="110e1168-332c-4165-bd6e-47419c571681" containerName="tempest-tests-tempest-tests-runner" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.805413 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="eae43c60-4f82-4c9f-86cb-c453f5633105" containerName="registry-server" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.806780 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.813512 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bnbxz" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.826153 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.890836 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"842839ab-b48f-429c-8823-152c1606dda7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.891205 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwm6w\" (UniqueName: \"kubernetes.io/projected/842839ab-b48f-429c-8823-152c1606dda7-kube-api-access-lwm6w\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"842839ab-b48f-429c-8823-152c1606dda7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.992602 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwm6w\" (UniqueName: \"kubernetes.io/projected/842839ab-b48f-429c-8823-152c1606dda7-kube-api-access-lwm6w\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"842839ab-b48f-429c-8823-152c1606dda7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.992688 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"842839ab-b48f-429c-8823-152c1606dda7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:56:58 crc kubenswrapper[4758]: I0130 09:56:58.993780 4758 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"842839ab-b48f-429c-8823-152c1606dda7\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:56:59 crc kubenswrapper[4758]: I0130 09:56:59.014656 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwm6w\" (UniqueName: \"kubernetes.io/projected/842839ab-b48f-429c-8823-152c1606dda7-kube-api-access-lwm6w\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"842839ab-b48f-429c-8823-152c1606dda7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:56:59 crc kubenswrapper[4758]: I0130 09:56:59.017927 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"842839ab-b48f-429c-8823-152c1606dda7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:56:59 crc 
kubenswrapper[4758]: I0130 09:56:59.208558 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 09:56:59 crc kubenswrapper[4758]: I0130 09:56:59.651030 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 09:56:59 crc kubenswrapper[4758]: I0130 09:56:59.924770 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"5ec033c17193dd2eddbcd2a6076b6c2f50aead98129c8a6395cc816bd312b9c8"} Jan 30 09:56:59 crc kubenswrapper[4758]: I0130 09:56:59.937072 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"842839ab-b48f-429c-8823-152c1606dda7","Type":"ContainerStarted","Data":"5d6cef65fe9cf1d80cd89f46f02a9e78be13b5a40cd2cb971e6d71e8015cda92"} Jan 30 09:57:00 crc kubenswrapper[4758]: I0130 09:57:00.947167 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"842839ab-b48f-429c-8823-152c1606dda7","Type":"ContainerStarted","Data":"5ed48f7837850948a1adefcf515a838afafc43d0169672c9d21985c6383146f5"} Jan 30 09:57:00 crc kubenswrapper[4758]: I0130 09:57:00.968477 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.963328133 podStartE2EDuration="2.968460069s" podCreationTimestamp="2026-01-30 09:56:58 +0000 UTC" firstStartedPulling="2026-01-30 09:56:59.659662322 +0000 UTC m=+5224.631973873" lastFinishedPulling="2026-01-30 09:57:00.664794258 +0000 UTC m=+5225.637105809" observedRunningTime="2026-01-30 09:57:00.961361457 +0000 UTC m=+5225.933673008" watchObservedRunningTime="2026-01-30 09:57:00.968460069 +0000 UTC m=+5225.940771620" Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.178825 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lpqrm/must-gather-qpckg"] Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.182681 4758 util.go:30] "No sandbox for pod can be found. 
Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.185182 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-lpqrm"/"default-dockercfg-5nkfs"
Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.185199 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lpqrm"/"openshift-service-ca.crt"
Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.193489 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lpqrm"/"kube-root-ca.crt"
Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.204025 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lpqrm/must-gather-qpckg"]
Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.275450 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bec5515f-517d-441a-8d27-381128c9cbe3-must-gather-output\") pod \"must-gather-qpckg\" (UID: \"bec5515f-517d-441a-8d27-381128c9cbe3\") " pod="openshift-must-gather-lpqrm/must-gather-qpckg"
Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.275759 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7d4q\" (UniqueName: \"kubernetes.io/projected/bec5515f-517d-441a-8d27-381128c9cbe3-kube-api-access-b7d4q\") pod \"must-gather-qpckg\" (UID: \"bec5515f-517d-441a-8d27-381128c9cbe3\") " pod="openshift-must-gather-lpqrm/must-gather-qpckg"
Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.377345 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bec5515f-517d-441a-8d27-381128c9cbe3-must-gather-output\") pod \"must-gather-qpckg\" (UID: \"bec5515f-517d-441a-8d27-381128c9cbe3\") " pod="openshift-must-gather-lpqrm/must-gather-qpckg"
Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.377407 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7d4q\" (UniqueName: \"kubernetes.io/projected/bec5515f-517d-441a-8d27-381128c9cbe3-kube-api-access-b7d4q\") pod \"must-gather-qpckg\" (UID: \"bec5515f-517d-441a-8d27-381128c9cbe3\") " pod="openshift-must-gather-lpqrm/must-gather-qpckg"
Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.377838 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bec5515f-517d-441a-8d27-381128c9cbe3-must-gather-output\") pod \"must-gather-qpckg\" (UID: \"bec5515f-517d-441a-8d27-381128c9cbe3\") " pod="openshift-must-gather-lpqrm/must-gather-qpckg"
Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.396953 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7d4q\" (UniqueName: \"kubernetes.io/projected/bec5515f-517d-441a-8d27-381128c9cbe3-kube-api-access-b7d4q\") pod \"must-gather-qpckg\" (UID: \"bec5515f-517d-441a-8d27-381128c9cbe3\") " pod="openshift-must-gather-lpqrm/must-gather-qpckg"
Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.500453 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/must-gather-qpckg"
Jan 30 09:57:29 crc kubenswrapper[4758]: I0130 09:57:29.983218 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lpqrm/must-gather-qpckg"]
Jan 30 09:57:30 crc kubenswrapper[4758]: I0130 09:57:30.186797 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/must-gather-qpckg" event={"ID":"bec5515f-517d-441a-8d27-381128c9cbe3","Type":"ContainerStarted","Data":"89ae98253414ceb39dcf218749a0f366f9233ed8dc3795138c34830143d35d76"}
Jan 30 09:57:38 crc kubenswrapper[4758]: I0130 09:57:38.292758 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/must-gather-qpckg" event={"ID":"bec5515f-517d-441a-8d27-381128c9cbe3","Type":"ContainerStarted","Data":"5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec"}
Jan 30 09:57:38 crc kubenswrapper[4758]: I0130 09:57:38.293590 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/must-gather-qpckg" event={"ID":"bec5515f-517d-441a-8d27-381128c9cbe3","Type":"ContainerStarted","Data":"a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee"}
Jan 30 09:57:38 crc kubenswrapper[4758]: I0130 09:57:38.313510 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lpqrm/must-gather-qpckg" podStartSLOduration=1.5552651979999998 podStartE2EDuration="9.31348545s" podCreationTimestamp="2026-01-30 09:57:29 +0000 UTC" firstStartedPulling="2026-01-30 09:57:29.996479491 +0000 UTC m=+5254.968791042" lastFinishedPulling="2026-01-30 09:57:37.754699743 +0000 UTC m=+5262.727011294" observedRunningTime="2026-01-30 09:57:38.309329659 +0000 UTC m=+5263.281641240" watchObservedRunningTime="2026-01-30 09:57:38.31348545 +0000 UTC m=+5263.285797021"
Jan 30 09:57:43 crc kubenswrapper[4758]: I0130 09:57:43.635923 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lpqrm/crc-debug-8g2cp"]
Jan 30 09:57:43 crc kubenswrapper[4758]: I0130 09:57:43.638578 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-8g2cp"
Jan 30 09:57:43 crc kubenswrapper[4758]: I0130 09:57:43.663323 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-host\") pod \"crc-debug-8g2cp\" (UID: \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\") " pod="openshift-must-gather-lpqrm/crc-debug-8g2cp"
Jan 30 09:57:43 crc kubenswrapper[4758]: I0130 09:57:43.663597 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz7dv\" (UniqueName: \"kubernetes.io/projected/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-kube-api-access-dz7dv\") pod \"crc-debug-8g2cp\" (UID: \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\") " pod="openshift-must-gather-lpqrm/crc-debug-8g2cp"
Jan 30 09:57:43 crc kubenswrapper[4758]: I0130 09:57:43.765458 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz7dv\" (UniqueName: \"kubernetes.io/projected/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-kube-api-access-dz7dv\") pod \"crc-debug-8g2cp\" (UID: \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\") " pod="openshift-must-gather-lpqrm/crc-debug-8g2cp"
Jan 30 09:57:43 crc kubenswrapper[4758]: I0130 09:57:43.765926 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-host\") pod \"crc-debug-8g2cp\" (UID: \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\") " pod="openshift-must-gather-lpqrm/crc-debug-8g2cp"
Jan 30 09:57:43 crc kubenswrapper[4758]: I0130 09:57:43.766142 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-host\") pod \"crc-debug-8g2cp\" (UID: \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\") " pod="openshift-must-gather-lpqrm/crc-debug-8g2cp"
Jan 30 09:57:43 crc kubenswrapper[4758]: I0130 09:57:43.826085 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz7dv\" (UniqueName: \"kubernetes.io/projected/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-kube-api-access-dz7dv\") pod \"crc-debug-8g2cp\" (UID: \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\") " pod="openshift-must-gather-lpqrm/crc-debug-8g2cp"
Jan 30 09:57:43 crc kubenswrapper[4758]: I0130 09:57:43.960687 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-8g2cp"
Jan 30 09:57:44 crc kubenswrapper[4758]: I0130 09:57:44.352147 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" event={"ID":"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e","Type":"ContainerStarted","Data":"a0af2912b7e8110758c70c798dcab6453fd6a32667c5d9a104395c99842f600d"}
Jan 30 09:57:55 crc kubenswrapper[4758]: I0130 09:57:55.446541 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" event={"ID":"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e","Type":"ContainerStarted","Data":"5ec49f03ce30f39c620b596bdc2dedc28bf6a7cfab662959f3b4adf09c9a537d"}
Jan 30 09:57:55 crc kubenswrapper[4758]: I0130 09:57:55.475562 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" podStartSLOduration=2.102101914 podStartE2EDuration="12.475542999s" podCreationTimestamp="2026-01-30 09:57:43 +0000 UTC" firstStartedPulling="2026-01-30 09:57:44.011917713 +0000 UTC m=+5268.984229264" lastFinishedPulling="2026-01-30 09:57:54.385358798 +0000 UTC m=+5279.357670349" observedRunningTime="2026-01-30 09:57:55.472610098 +0000 UTC m=+5280.444921649" watchObservedRunningTime="2026-01-30 09:57:55.475542999 +0000 UTC m=+5280.447854550"
Jan 30 09:58:50 crc kubenswrapper[4758]: I0130 09:58:50.910659 4758 generic.go:334] "Generic (PLEG): container finished" podID="41c6b24c-ec14-4379-9101-cd5ea6d2ab2e" containerID="5ec49f03ce30f39c620b596bdc2dedc28bf6a7cfab662959f3b4adf09c9a537d" exitCode=0
Jan 30 09:58:50 crc kubenswrapper[4758]: I0130 09:58:50.910748 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" event={"ID":"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e","Type":"ContainerDied","Data":"5ec49f03ce30f39c620b596bdc2dedc28bf6a7cfab662959f3b4adf09c9a537d"}
Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.011573 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-8g2cp"
Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.050857 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lpqrm/crc-debug-8g2cp"]
Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.059528 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lpqrm/crc-debug-8g2cp"]
Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.136548 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz7dv\" (UniqueName: \"kubernetes.io/projected/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-kube-api-access-dz7dv\") pod \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\" (UID: \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\") "
Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.136624 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-host\") pod \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\" (UID: \"41c6b24c-ec14-4379-9101-cd5ea6d2ab2e\") "
Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.137006 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-host" (OuterVolumeSpecName: "host") pod "41c6b24c-ec14-4379-9101-cd5ea6d2ab2e" (UID: "41c6b24c-ec14-4379-9101-cd5ea6d2ab2e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.137369 4758 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-host\") on node \"crc\" DevicePath \"\"" Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.152669 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-kube-api-access-dz7dv" (OuterVolumeSpecName: "kube-api-access-dz7dv") pod "41c6b24c-ec14-4379-9101-cd5ea6d2ab2e" (UID: "41c6b24c-ec14-4379-9101-cd5ea6d2ab2e"). InnerVolumeSpecName "kube-api-access-dz7dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.239086 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz7dv\" (UniqueName: \"kubernetes.io/projected/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e-kube-api-access-dz7dv\") on node \"crc\" DevicePath \"\"" Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.941153 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0af2912b7e8110758c70c798dcab6453fd6a32667c5d9a104395c99842f600d" Jan 30 09:58:52 crc kubenswrapper[4758]: I0130 09:58:52.941472 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-8g2cp" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.296698 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lpqrm/crc-debug-jdz8q"] Jan 30 09:58:53 crc kubenswrapper[4758]: E0130 09:58:53.297082 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c6b24c-ec14-4379-9101-cd5ea6d2ab2e" containerName="container-00" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.297096 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c6b24c-ec14-4379-9101-cd5ea6d2ab2e" containerName="container-00" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.297305 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c6b24c-ec14-4379-9101-cd5ea6d2ab2e" containerName="container-00" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.297924 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.479598 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxk2f\" (UniqueName: \"kubernetes.io/projected/a579bed6-89ad-42fe-9633-2f0c7f34d9de-kube-api-access-cxk2f\") pod \"crc-debug-jdz8q\" (UID: \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\") " pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.479708 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a579bed6-89ad-42fe-9633-2f0c7f34d9de-host\") pod \"crc-debug-jdz8q\" (UID: \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\") " pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.581589 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxk2f\" (UniqueName: \"kubernetes.io/projected/a579bed6-89ad-42fe-9633-2f0c7f34d9de-kube-api-access-cxk2f\") pod \"crc-debug-jdz8q\" (UID: \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\") " pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.581687 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a579bed6-89ad-42fe-9633-2f0c7f34d9de-host\") pod \"crc-debug-jdz8q\" (UID: \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\") " pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.581925 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a579bed6-89ad-42fe-9633-2f0c7f34d9de-host\") pod \"crc-debug-jdz8q\" (UID: \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\") " pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.619600 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxk2f\" (UniqueName: \"kubernetes.io/projected/a579bed6-89ad-42fe-9633-2f0c7f34d9de-kube-api-access-cxk2f\") pod \"crc-debug-jdz8q\" (UID: \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\") " pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.625324 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.781117 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c6b24c-ec14-4379-9101-cd5ea6d2ab2e" path="/var/lib/kubelet/pods/41c6b24c-ec14-4379-9101-cd5ea6d2ab2e/volumes" Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.952165 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" event={"ID":"a579bed6-89ad-42fe-9633-2f0c7f34d9de","Type":"ContainerStarted","Data":"d5124596fab9753fd484ec6efd38bf02b9c0c9ce543d442c2f18b786111542c3"} Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.952222 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" event={"ID":"a579bed6-89ad-42fe-9633-2f0c7f34d9de","Type":"ContainerStarted","Data":"e056a91092edf01dbe7658a48842b4f58bccb927ace5f17a092ad55a2ff57214"} Jan 30 09:58:53 crc kubenswrapper[4758]: I0130 09:58:53.979330 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" podStartSLOduration=0.979311103 podStartE2EDuration="979.311103ms" podCreationTimestamp="2026-01-30 09:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 09:58:53.969375911 +0000 UTC m=+5338.941687482" watchObservedRunningTime="2026-01-30 09:58:53.979311103 +0000 UTC m=+5338.951622654" Jan 30 09:58:54 crc kubenswrapper[4758]: I0130 09:58:54.960762 4758 generic.go:334] "Generic (PLEG): container finished" podID="a579bed6-89ad-42fe-9633-2f0c7f34d9de" containerID="d5124596fab9753fd484ec6efd38bf02b9c0c9ce543d442c2f18b786111542c3" exitCode=0 Jan 30 09:58:54 crc kubenswrapper[4758]: I0130 09:58:54.960798 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" event={"ID":"a579bed6-89ad-42fe-9633-2f0c7f34d9de","Type":"ContainerDied","Data":"d5124596fab9753fd484ec6efd38bf02b9c0c9ce543d442c2f18b786111542c3"} Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.072050 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-jdz8q" Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.102815 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lpqrm/crc-debug-jdz8q"] Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.111673 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lpqrm/crc-debug-jdz8q"] Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.237340 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxk2f\" (UniqueName: \"kubernetes.io/projected/a579bed6-89ad-42fe-9633-2f0c7f34d9de-kube-api-access-cxk2f\") pod \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\" (UID: \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\") " Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.237486 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a579bed6-89ad-42fe-9633-2f0c7f34d9de-host\") pod \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\" (UID: \"a579bed6-89ad-42fe-9633-2f0c7f34d9de\") " Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.237955 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a579bed6-89ad-42fe-9633-2f0c7f34d9de-host" (OuterVolumeSpecName: "host") pod "a579bed6-89ad-42fe-9633-2f0c7f34d9de" (UID: "a579bed6-89ad-42fe-9633-2f0c7f34d9de"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.243517 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a579bed6-89ad-42fe-9633-2f0c7f34d9de-kube-api-access-cxk2f" (OuterVolumeSpecName: "kube-api-access-cxk2f") pod "a579bed6-89ad-42fe-9633-2f0c7f34d9de" (UID: "a579bed6-89ad-42fe-9633-2f0c7f34d9de"). InnerVolumeSpecName "kube-api-access-cxk2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.339827 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxk2f\" (UniqueName: \"kubernetes.io/projected/a579bed6-89ad-42fe-9633-2f0c7f34d9de-kube-api-access-cxk2f\") on node \"crc\" DevicePath \"\"" Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.340135 4758 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a579bed6-89ad-42fe-9633-2f0c7f34d9de-host\") on node \"crc\" DevicePath \"\"" Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.981114 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e056a91092edf01dbe7658a48842b4f58bccb927ace5f17a092ad55a2ff57214" Jan 30 09:58:56 crc kubenswrapper[4758]: I0130 09:58:56.981184 4758 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.319329 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lpqrm/crc-debug-kp2t6"]
Jan 30 09:58:57 crc kubenswrapper[4758]: E0130 09:58:57.319782 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a579bed6-89ad-42fe-9633-2f0c7f34d9de" containerName="container-00"
Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.319799 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="a579bed6-89ad-42fe-9633-2f0c7f34d9de" containerName="container-00"
Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.320026 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="a579bed6-89ad-42fe-9633-2f0c7f34d9de" containerName="container-00"
Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.320791 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-kp2t6"
Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.467303 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llcpf\" (UniqueName: \"kubernetes.io/projected/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-kube-api-access-llcpf\") pod \"crc-debug-kp2t6\" (UID: \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\") " pod="openshift-must-gather-lpqrm/crc-debug-kp2t6"
Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.467696 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-host\") pod \"crc-debug-kp2t6\" (UID: \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\") " pod="openshift-must-gather-lpqrm/crc-debug-kp2t6"
Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.569542 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-host\") pod \"crc-debug-kp2t6\" (UID: \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\") " pod="openshift-must-gather-lpqrm/crc-debug-kp2t6"
Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.570133 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llcpf\" (UniqueName: \"kubernetes.io/projected/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-kube-api-access-llcpf\") pod \"crc-debug-kp2t6\" (UID: \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\") " pod="openshift-must-gather-lpqrm/crc-debug-kp2t6"
Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.570274 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-host\") pod \"crc-debug-kp2t6\" (UID: \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\") " pod="openshift-must-gather-lpqrm/crc-debug-kp2t6"
Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.599221 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llcpf\" (UniqueName: \"kubernetes.io/projected/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-kube-api-access-llcpf\") pod \"crc-debug-kp2t6\" (UID: \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\") " pod="openshift-must-gather-lpqrm/crc-debug-kp2t6"
Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.636662 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-kp2t6"
Jan 30 09:58:57 crc kubenswrapper[4758]: W0130 09:58:57.673226 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f9e7233_29a7_4dbe_a96c_d3bf9abd7077.slice/crio-31984a9dbbcee34344fd17ae59d8e1e21a072bb43e418b4d7aa5e42bc7329326 WatchSource:0}: Error finding container 31984a9dbbcee34344fd17ae59d8e1e21a072bb43e418b4d7aa5e42bc7329326: Status 404 returned error can't find the container with id 31984a9dbbcee34344fd17ae59d8e1e21a072bb43e418b4d7aa5e42bc7329326
Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.783604 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a579bed6-89ad-42fe-9633-2f0c7f34d9de" path="/var/lib/kubelet/pods/a579bed6-89ad-42fe-9633-2f0c7f34d9de/volumes"
Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.991403 4758 generic.go:334] "Generic (PLEG): container finished" podID="5f9e7233-29a7-4dbe-a96c-d3bf9abd7077" containerID="cb2be7757629008fc0336fea0781bdd9ab81c5796a2894a5f33c24f797829704" exitCode=0
Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.991746 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/crc-debug-kp2t6" event={"ID":"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077","Type":"ContainerDied","Data":"cb2be7757629008fc0336fea0781bdd9ab81c5796a2894a5f33c24f797829704"}
Jan 30 09:58:57 crc kubenswrapper[4758]: I0130 09:58:57.991777 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/crc-debug-kp2t6" event={"ID":"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077","Type":"ContainerStarted","Data":"31984a9dbbcee34344fd17ae59d8e1e21a072bb43e418b4d7aa5e42bc7329326"}
Jan 30 09:58:58 crc kubenswrapper[4758]: I0130 09:58:58.252196 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lpqrm/crc-debug-kp2t6"]
Jan 30 09:58:58 crc kubenswrapper[4758]: I0130 09:58:58.262736 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lpqrm/crc-debug-kp2t6"]
Jan 30 09:58:59 crc kubenswrapper[4758]: I0130 09:58:59.108977 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-kp2t6"
Jan 30 09:58:59 crc kubenswrapper[4758]: I0130 09:58:59.300528 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-host\") pod \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\" (UID: \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\") "
Jan 30 09:58:59 crc kubenswrapper[4758]: I0130 09:58:59.300939 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llcpf\" (UniqueName: \"kubernetes.io/projected/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-kube-api-access-llcpf\") pod \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\" (UID: \"5f9e7233-29a7-4dbe-a96c-d3bf9abd7077\") "
Jan 30 09:58:59 crc kubenswrapper[4758]: I0130 09:58:59.300712 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-host" (OuterVolumeSpecName: "host") pod "5f9e7233-29a7-4dbe-a96c-d3bf9abd7077" (UID: "5f9e7233-29a7-4dbe-a96c-d3bf9abd7077"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 09:58:59 crc kubenswrapper[4758]: I0130 09:58:59.301833 4758 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-host\") on node \"crc\" DevicePath \"\""
Jan 30 09:58:59 crc kubenswrapper[4758]: I0130 09:58:59.308552 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-kube-api-access-llcpf" (OuterVolumeSpecName: "kube-api-access-llcpf") pod "5f9e7233-29a7-4dbe-a96c-d3bf9abd7077" (UID: "5f9e7233-29a7-4dbe-a96c-d3bf9abd7077"). InnerVolumeSpecName "kube-api-access-llcpf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 09:58:59 crc kubenswrapper[4758]: I0130 09:58:59.403513 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llcpf\" (UniqueName: \"kubernetes.io/projected/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077-kube-api-access-llcpf\") on node \"crc\" DevicePath \"\""
Jan 30 09:58:59 crc kubenswrapper[4758]: I0130 09:58:59.777248 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f9e7233-29a7-4dbe-a96c-d3bf9abd7077" path="/var/lib/kubelet/pods/5f9e7233-29a7-4dbe-a96c-d3bf9abd7077/volumes"
Jan 30 09:59:00 crc kubenswrapper[4758]: I0130 09:59:00.010103 4758 scope.go:117] "RemoveContainer" containerID="cb2be7757629008fc0336fea0781bdd9ab81c5796a2894a5f33c24f797829704"
Jan 30 09:59:00 crc kubenswrapper[4758]: I0130 09:59:00.010265 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/crc-debug-kp2t6"
Jan 30 09:59:22 crc kubenswrapper[4758]: I0130 09:59:22.387561 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 09:59:22 crc kubenswrapper[4758]: I0130 09:59:22.388143 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 09:59:26 crc kubenswrapper[4758]: I0130 09:59:26.423519 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-856f46cdd-mkt57_e33c3e33-3106-483e-bdba-400a2911ff27/barbican-api/0.log"
Jan 30 09:59:26 crc kubenswrapper[4758]: I0130 09:59:26.625490 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-856f46cdd-mkt57_e33c3e33-3106-483e-bdba-400a2911ff27/barbican-api-log/0.log"
Jan 30 09:59:26 crc kubenswrapper[4758]: I0130 09:59:26.706674 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5b64f54b54-68xdf_dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f/barbican-keystone-listener/0.log"
Jan 30 09:59:26 crc kubenswrapper[4758]: I0130 09:59:26.910931 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5b64f54b54-68xdf_dd0f801d-8dd7-4f95-9f23-d3eaf8ed825f/barbican-keystone-listener-log/0.log"
Jan 30 09:59:27 crc kubenswrapper[4758]: I0130 09:59:27.014184 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9c5c6c655-zfgmj_db351db3-71c9-4b03-98b9-68da68f45f14/barbican-worker/0.log"
path="/var/log/pods/openstack_barbican-worker-9c5c6c655-zfgmj_db351db3-71c9-4b03-98b9-68da68f45f14/barbican-worker/0.log" Jan 30 09:59:27 crc kubenswrapper[4758]: I0130 09:59:27.080785 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9c5c6c655-zfgmj_db351db3-71c9-4b03-98b9-68da68f45f14/barbican-worker-log/0.log" Jan 30 09:59:27 crc kubenswrapper[4758]: I0130 09:59:27.280354 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-zvcfp_64e0d966-7ff9-4dd8-97c0-660cde10793b/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:27 crc kubenswrapper[4758]: I0130 09:59:27.518958 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dc56c5be-c70d-44a7-8914-cf2e598f3333/ceilometer-notification-agent/0.log" Jan 30 09:59:27 crc kubenswrapper[4758]: I0130 09:59:27.538578 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dc56c5be-c70d-44a7-8914-cf2e598f3333/ceilometer-central-agent/0.log" Jan 30 09:59:27 crc kubenswrapper[4758]: I0130 09:59:27.645890 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dc56c5be-c70d-44a7-8914-cf2e598f3333/proxy-httpd/0.log" Jan 30 09:59:27 crc kubenswrapper[4758]: I0130 09:59:27.702341 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dc56c5be-c70d-44a7-8914-cf2e598f3333/sg-core/0.log" Jan 30 09:59:27 crc kubenswrapper[4758]: I0130 09:59:27.885339 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d6d3f5e9-e330-476b-be63-775114f987e6/cinder-api/0.log" Jan 30 09:59:27 crc kubenswrapper[4758]: I0130 09:59:27.949335 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d6d3f5e9-e330-476b-be63-775114f987e6/cinder-api-log/0.log" Jan 30 09:59:28 crc kubenswrapper[4758]: I0130 09:59:28.419109 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_fbd50144-fe99-468f-a32b-172996d95ca1/cinder-scheduler/0.log" Jan 30 09:59:28 crc kubenswrapper[4758]: I0130 09:59:28.576773 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_fbd50144-fe99-468f-a32b-172996d95ca1/probe/0.log" Jan 30 09:59:28 crc kubenswrapper[4758]: I0130 09:59:28.620667 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hzvfn_a282c0aa-8c3d-4a78-9fd6-1971701a1158/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:28 crc kubenswrapper[4758]: I0130 09:59:28.857289 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-95b4x_b5727310-9e0b-40f5-ae4e-209ed7d3ee36/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:28 crc kubenswrapper[4758]: I0130 09:59:28.932727 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-549cc57c95-cpkk5_cdf54085-1dd0-4eb1-9640-e75c69be5a44/init/0.log" Jan 30 09:59:29 crc kubenswrapper[4758]: I0130 09:59:29.184122 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-549cc57c95-cpkk5_cdf54085-1dd0-4eb1-9640-e75c69be5a44/init/0.log" Jan 30 09:59:29 crc kubenswrapper[4758]: I0130 09:59:29.323006 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-t6b6w_103d82ed-724d-4545-9b1c-04633d68c1ef/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:29 crc kubenswrapper[4758]: I0130 09:59:29.371934 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-549cc57c95-cpkk5_cdf54085-1dd0-4eb1-9640-e75c69be5a44/dnsmasq-dns/0.log" Jan 30 09:59:29 crc kubenswrapper[4758]: I0130 09:59:29.573628 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_9e6a95ad-6f31-4494-9caf-5eea1c43e005/glance-log/0.log" Jan 30 09:59:29 crc kubenswrapper[4758]: I0130 09:59:29.584587 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_9e6a95ad-6f31-4494-9caf-5eea1c43e005/glance-httpd/0.log" Jan 30 09:59:29 crc kubenswrapper[4758]: I0130 09:59:29.810131 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7fa932ed-7bd7-4827-a24f-e29c15c9b563/glance-log/0.log" Jan 30 09:59:29 crc kubenswrapper[4758]: I0130 09:59:29.917406 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7fa932ed-7bd7-4827-a24f-e29c15c9b563/glance-httpd/0.log" Jan 30 09:59:30 crc kubenswrapper[4758]: I0130 09:59:30.078360 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5cf698bb7b-gp87v_97906db2-3b2d-44ec-af77-d3edf75b7f76/horizon/2.log" Jan 30 09:59:30 crc kubenswrapper[4758]: I0130 09:59:30.264719 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5cf698bb7b-gp87v_97906db2-3b2d-44ec-af77-d3edf75b7f76/horizon/1.log" Jan 30 09:59:30 crc kubenswrapper[4758]: I0130 09:59:30.507687 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-4rccq_836593cc-2b98-4f54-8407-6d92687559f5/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:30 crc kubenswrapper[4758]: I0130 09:59:30.673864 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5cf698bb7b-gp87v_97906db2-3b2d-44ec-af77-d3edf75b7f76/horizon-log/0.log" Jan 30 09:59:30 crc kubenswrapper[4758]: I0130 09:59:30.707675 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-rtws8_39adf6a6-10cd-412d-aca4-3c68ddcf8887/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:31 crc kubenswrapper[4758]: I0130 09:59:31.101805 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29496061-prvzt_c5ff7966-87c8-4b9b-8520-a05c3b5d252d/keystone-cron/0.log" Jan 30 09:59:31 crc kubenswrapper[4758]: I0130 09:59:31.327351 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_402b7d3e-d66f-412a-a3a8-4c45a9a47628/kube-state-metrics/0.log" Jan 30 09:59:31 crc kubenswrapper[4758]: I0130 09:59:31.484867 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-546cd7df57-wnwgz_283998cc-90b9-49fb-91f5-7cfd514603d0/keystone-api/0.log" Jan 30 09:59:31 crc kubenswrapper[4758]: I0130 09:59:31.826808 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-7m7rv_35efb0cc-1bf4-4052-af18-b206ea052f80/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:32 crc kubenswrapper[4758]: I0130 09:59:32.530723 4758 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-frbkr_e377cd96-b016-4154-bae7-fa61f9be7472/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:32 crc kubenswrapper[4758]: I0130 09:59:32.691545 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6f9c8c6ff5-f2sb7_0bacb926-f58c-4c06-870a-633b7a3795c5/neutron-httpd/0.log" Jan 30 09:59:32 crc kubenswrapper[4758]: I0130 09:59:32.973106 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6f9c8c6ff5-f2sb7_0bacb926-f58c-4c06-870a-633b7a3795c5/neutron-api/0.log" Jan 30 09:59:33 crc kubenswrapper[4758]: I0130 09:59:33.637967 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d5c8d4f1-2007-458e-a918-35eea3933622/nova-cell0-conductor-conductor/0.log" Jan 30 09:59:33 crc kubenswrapper[4758]: I0130 09:59:33.830221 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_9de33ead-37a1-4675-bad7-79b672a0c954/nova-cell1-conductor-conductor/0.log" Jan 30 09:59:34 crc kubenswrapper[4758]: I0130 09:59:34.306612 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6c7b05a5-3faf-4e02-9bb5-f79a4745f073/nova-cell1-novncproxy-novncproxy/0.log" Jan 30 09:59:34 crc kubenswrapper[4758]: I0130 09:59:34.732649 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-vfn7z_48c9d5d6-6dc5-4848-bade-3c302106b074/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:35 crc kubenswrapper[4758]: I0130 09:59:35.233298 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fd2d2fe7-5dac-4f3b-80f6-650712925495/nova-api-log/0.log" Jan 30 09:59:35 crc kubenswrapper[4758]: I0130 09:59:35.378632 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e4213c10-dde9-4a4d-9af9-304dd08f755c/nova-metadata-log/0.log" Jan 30 09:59:35 crc kubenswrapper[4758]: I0130 09:59:35.727504 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fd2d2fe7-5dac-4f3b-80f6-650712925495/nova-api-api/0.log" Jan 30 09:59:36 crc kubenswrapper[4758]: I0130 09:59:36.015932 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0a15517a-ff48-40d1-91b4-442bfef91fc1/mysql-bootstrap/0.log" Jan 30 09:59:36 crc kubenswrapper[4758]: I0130 09:59:36.129481 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_4697d282-598e-4faa-ae13-6ba6d3747bf0/nova-scheduler-scheduler/0.log" Jan 30 09:59:36 crc kubenswrapper[4758]: I0130 09:59:36.265291 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0a15517a-ff48-40d1-91b4-442bfef91fc1/galera/0.log" Jan 30 09:59:36 crc kubenswrapper[4758]: I0130 09:59:36.350979 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0a15517a-ff48-40d1-91b4-442bfef91fc1/mysql-bootstrap/0.log" Jan 30 09:59:36 crc kubenswrapper[4758]: I0130 09:59:36.698908 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1787d8b1-5b19-41e5-a66d-8375f9d5bb3f/mysql-bootstrap/0.log" Jan 30 09:59:36 crc kubenswrapper[4758]: I0130 09:59:36.841212 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1787d8b1-5b19-41e5-a66d-8375f9d5bb3f/mysql-bootstrap/0.log" Jan 30 
09:59:36 crc kubenswrapper[4758]: I0130 09:59:36.947868 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1787d8b1-5b19-41e5-a66d-8375f9d5bb3f/galera/0.log" Jan 30 09:59:37 crc kubenswrapper[4758]: I0130 09:59:37.145365 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_e81b8de8-1714-4a5d-852a-e61d4bc9cd5d/openstackclient/0.log" Jan 30 09:59:37 crc kubenswrapper[4758]: I0130 09:59:37.344514 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-bg2b8_78294966-2fbd-4ed5-8d2a-2096ac07dac1/ovn-controller/0.log" Jan 30 09:59:37 crc kubenswrapper[4758]: I0130 09:59:37.496735 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-w9zqv_1c89386d-e6bb-45b3-bd95-970270275127/openstack-network-exporter/0.log" Jan 30 09:59:37 crc kubenswrapper[4758]: I0130 09:59:37.563771 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e4213c10-dde9-4a4d-9af9-304dd08f755c/nova-metadata-metadata/0.log" Jan 30 09:59:37 crc kubenswrapper[4758]: I0130 09:59:37.812888 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9jzfn_86f64629-c944-4783-8012-7cea45690009/ovsdb-server-init/0.log" Jan 30 09:59:37 crc kubenswrapper[4758]: I0130 09:59:37.965287 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9jzfn_86f64629-c944-4783-8012-7cea45690009/ovsdb-server-init/0.log" Jan 30 09:59:37 crc kubenswrapper[4758]: I0130 09:59:37.970613 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9jzfn_86f64629-c944-4783-8012-7cea45690009/ovs-vswitchd/0.log" Jan 30 09:59:38 crc kubenswrapper[4758]: I0130 09:59:38.092477 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9jzfn_86f64629-c944-4783-8012-7cea45690009/ovsdb-server/0.log" Jan 30 09:59:38 crc kubenswrapper[4758]: I0130 09:59:38.273220 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-fv46l_9db7f310-f803-4981-8a55-5d45e9015488/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:38 crc kubenswrapper[4758]: I0130 09:59:38.344364 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_831717ab-1273-408c-9fdf-4cd5bd2d2bb9/openstack-network-exporter/0.log" Jan 30 09:59:38 crc kubenswrapper[4758]: I0130 09:59:38.466912 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_831717ab-1273-408c-9fdf-4cd5bd2d2bb9/ovn-northd/0.log" Jan 30 09:59:38 crc kubenswrapper[4758]: I0130 09:59:38.615912 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a7ba2509-bffc-4639-9b6f-188e2a194b7a/openstack-network-exporter/0.log" Jan 30 09:59:38 crc kubenswrapper[4758]: I0130 09:59:38.672739 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a7ba2509-bffc-4639-9b6f-188e2a194b7a/ovsdbserver-nb/0.log" Jan 30 09:59:39 crc kubenswrapper[4758]: I0130 09:59:39.244702 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0817ec2c-1c6d-4c1b-a019-3f2579ade18a/openstack-network-exporter/0.log" Jan 30 09:59:39 crc kubenswrapper[4758]: I0130 09:59:39.372231 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0817ec2c-1c6d-4c1b-a019-3f2579ade18a/ovsdbserver-sb/0.log" Jan 30 
09:59:39 crc kubenswrapper[4758]: I0130 09:59:39.525077 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_9896abbd-2b46-4ad8-99ce-6cf9c5ebb65d/memcached/0.log" Jan 30 09:59:39 crc kubenswrapper[4758]: I0130 09:59:39.763135 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-75649bd464-bvxps_ca89048c-91af-4732-8ef8-24da4618ccf9/placement-api/0.log" Jan 30 09:59:39 crc kubenswrapper[4758]: I0130 09:59:39.769314 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ec0e1aed-0ac5-4482-906f-89c9243729ea/setup-container/0.log" Jan 30 09:59:39 crc kubenswrapper[4758]: I0130 09:59:39.839307 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-75649bd464-bvxps_ca89048c-91af-4732-8ef8-24da4618ccf9/placement-log/0.log" Jan 30 09:59:40 crc kubenswrapper[4758]: I0130 09:59:40.001877 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ec0e1aed-0ac5-4482-906f-89c9243729ea/setup-container/0.log" Jan 30 09:59:40 crc kubenswrapper[4758]: I0130 09:59:40.048293 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c72311ae-5d7e-4978-a690-a9bee0b3672b/setup-container/0.log" Jan 30 09:59:40 crc kubenswrapper[4758]: I0130 09:59:40.101666 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ec0e1aed-0ac5-4482-906f-89c9243729ea/rabbitmq/0.log" Jan 30 09:59:40 crc kubenswrapper[4758]: I0130 09:59:40.303626 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-d2zhn_49ef08a8-e5d8-4a62-bc4d-227948c7fa12/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:40 crc kubenswrapper[4758]: I0130 09:59:40.371598 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c72311ae-5d7e-4978-a690-a9bee0b3672b/setup-container/0.log" Jan 30 09:59:40 crc kubenswrapper[4758]: I0130 09:59:40.396310 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c72311ae-5d7e-4978-a690-a9bee0b3672b/rabbitmq/0.log" Jan 30 09:59:40 crc kubenswrapper[4758]: I0130 09:59:40.801447 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-rr84b_a7d607da-5923-4ef5-82ba-083df5db3864/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:40 crc kubenswrapper[4758]: I0130 09:59:40.802374 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-zgv9c_350797d0-7758-4c0d-84cb-ec160451f377/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:40 crc kubenswrapper[4758]: I0130 09:59:40.805867 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-5nm8t_2b0522c4-4143-4ac3-b5c7-ea6c073dfc38/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.132738 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-wlb6p_1135df14-5d2f-47ff-9038-fce2addc71d0/ssh-known-hosts-edpm-deployment/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.256234 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-75f5775999-fhl5h_c2358e5c-db98-4b7b-8b6c-2e83132655a9/proxy-server/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: 
I0130 09:59:41.347585 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-75f5775999-fhl5h_c2358e5c-db98-4b7b-8b6c-2e83132655a9/proxy-httpd/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.460234 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/account-auditor/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.476330 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-ws6z7_0a36303a-ea35-4a12-be39-906481ea247a/swift-ring-rebalance/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.558391 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/account-reaper/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.649348 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/account-replicator/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.731547 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/container-auditor/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.748731 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/account-server/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.795421 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/container-replicator/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.857477 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/container-server/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.934195 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/container-updater/0.log" Jan 30 09:59:41 crc kubenswrapper[4758]: I0130 09:59:41.971558 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/object-expirer/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.040496 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/object-auditor/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.080751 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/object-replicator/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.172525 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/object-updater/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.181437 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/object-server/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.289117 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/swift-recon-cron/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.301405 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_f978baf9-b7c0-4d25-8bca-e95a018ba2af/rsync/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.541202 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-2ptpn_38932896-a566-4440-b672-33909cb638b0/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.611635 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_110e1168-332c-4165-bd6e-47419c571681/tempest-tests-tempest-tests-runner/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.740874 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_842839ab-b48f-429c-8823-152c1606dda7/test-operator-logs-container/0.log" Jan 30 09:59:42 crc kubenswrapper[4758]: I0130 09:59:42.847427 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-55r6r_21db3b55-b11b-4ca5-a2d0-676ec4e6fb83/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 09:59:52 crc kubenswrapper[4758]: I0130 09:59:52.387472 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 09:59:52 crc kubenswrapper[4758]: I0130 09:59:52.387904 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.145411 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc"] Jan 30 10:00:00 crc kubenswrapper[4758]: E0130 10:00:00.146312 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f9e7233-29a7-4dbe-a96c-d3bf9abd7077" containerName="container-00" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.146324 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f9e7233-29a7-4dbe-a96c-d3bf9abd7077" containerName="container-00" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.146493 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f9e7233-29a7-4dbe-a96c-d3bf9abd7077" containerName="container-00" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.147115 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.150471 4758 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.150765 4758 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.169051 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc"] Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.179374 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkb5x\" (UniqueName: \"kubernetes.io/projected/b6e6a092-7ac9-466c-9d70-f324f2908447-kube-api-access-gkb5x\") pod \"collect-profiles-29496120-czxhc\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.179448 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6e6a092-7ac9-466c-9d70-f324f2908447-config-volume\") pod \"collect-profiles-29496120-czxhc\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.179567 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6e6a092-7ac9-466c-9d70-f324f2908447-secret-volume\") pod \"collect-profiles-29496120-czxhc\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.280915 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkb5x\" (UniqueName: \"kubernetes.io/projected/b6e6a092-7ac9-466c-9d70-f324f2908447-kube-api-access-gkb5x\") pod \"collect-profiles-29496120-czxhc\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.280995 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6e6a092-7ac9-466c-9d70-f324f2908447-config-volume\") pod \"collect-profiles-29496120-czxhc\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.281054 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6e6a092-7ac9-466c-9d70-f324f2908447-secret-volume\") pod \"collect-profiles-29496120-czxhc\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.282271 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6e6a092-7ac9-466c-9d70-f324f2908447-config-volume\") pod 
\"collect-profiles-29496120-czxhc\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.290802 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6e6a092-7ac9-466c-9d70-f324f2908447-secret-volume\") pod \"collect-profiles-29496120-czxhc\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.299408 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkb5x\" (UniqueName: \"kubernetes.io/projected/b6e6a092-7ac9-466c-9d70-f324f2908447-kube-api-access-gkb5x\") pod \"collect-profiles-29496120-czxhc\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.464549 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:00 crc kubenswrapper[4758]: I0130 10:00:00.958455 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc"] Jan 30 10:00:01 crc kubenswrapper[4758]: I0130 10:00:01.590472 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" event={"ID":"b6e6a092-7ac9-466c-9d70-f324f2908447","Type":"ContainerStarted","Data":"da2f39fd9f27fcc1481cbfd2a939f7a131858726c2f9923b0278f5da04987da1"} Jan 30 10:00:01 crc kubenswrapper[4758]: I0130 10:00:01.590811 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" event={"ID":"b6e6a092-7ac9-466c-9d70-f324f2908447","Type":"ContainerStarted","Data":"819a63cf33834f09384ea6d3cc1d1229b348ccabb1c99d2a3f50d83a15015a2a"} Jan 30 10:00:01 crc kubenswrapper[4758]: I0130 10:00:01.618537 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" podStartSLOduration=1.6185187189999999 podStartE2EDuration="1.618518719s" podCreationTimestamp="2026-01-30 10:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:00:01.610372122 +0000 UTC m=+5406.582683663" watchObservedRunningTime="2026-01-30 10:00:01.618518719 +0000 UTC m=+5406.590830260" Jan 30 10:00:02 crc kubenswrapper[4758]: I0130 10:00:02.599338 4758 generic.go:334] "Generic (PLEG): container finished" podID="b6e6a092-7ac9-466c-9d70-f324f2908447" containerID="da2f39fd9f27fcc1481cbfd2a939f7a131858726c2f9923b0278f5da04987da1" exitCode=0 Jan 30 10:00:02 crc kubenswrapper[4758]: I0130 10:00:02.599377 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" event={"ID":"b6e6a092-7ac9-466c-9d70-f324f2908447","Type":"ContainerDied","Data":"da2f39fd9f27fcc1481cbfd2a939f7a131858726c2f9923b0278f5da04987da1"} Jan 30 10:00:03 crc kubenswrapper[4758]: I0130 10:00:03.961974 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.047160 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6e6a092-7ac9-466c-9d70-f324f2908447-secret-volume\") pod \"b6e6a092-7ac9-466c-9d70-f324f2908447\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.047506 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6e6a092-7ac9-466c-9d70-f324f2908447-config-volume\") pod \"b6e6a092-7ac9-466c-9d70-f324f2908447\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.047704 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkb5x\" (UniqueName: \"kubernetes.io/projected/b6e6a092-7ac9-466c-9d70-f324f2908447-kube-api-access-gkb5x\") pod \"b6e6a092-7ac9-466c-9d70-f324f2908447\" (UID: \"b6e6a092-7ac9-466c-9d70-f324f2908447\") " Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.048492 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6e6a092-7ac9-466c-9d70-f324f2908447-config-volume" (OuterVolumeSpecName: "config-volume") pod "b6e6a092-7ac9-466c-9d70-f324f2908447" (UID: "b6e6a092-7ac9-466c-9d70-f324f2908447"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.056227 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e6a092-7ac9-466c-9d70-f324f2908447-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b6e6a092-7ac9-466c-9d70-f324f2908447" (UID: "b6e6a092-7ac9-466c-9d70-f324f2908447"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.065815 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6e6a092-7ac9-466c-9d70-f324f2908447-kube-api-access-gkb5x" (OuterVolumeSpecName: "kube-api-access-gkb5x") pod "b6e6a092-7ac9-466c-9d70-f324f2908447" (UID: "b6e6a092-7ac9-466c-9d70-f324f2908447"). InnerVolumeSpecName "kube-api-access-gkb5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.149273 4758 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6e6a092-7ac9-466c-9d70-f324f2908447-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.149302 4758 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6e6a092-7ac9-466c-9d70-f324f2908447-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.149313 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkb5x\" (UniqueName: \"kubernetes.io/projected/b6e6a092-7ac9-466c-9d70-f324f2908447-kube-api-access-gkb5x\") on node \"crc\" DevicePath \"\"" Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.616367 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" event={"ID":"b6e6a092-7ac9-466c-9d70-f324f2908447","Type":"ContainerDied","Data":"819a63cf33834f09384ea6d3cc1d1229b348ccabb1c99d2a3f50d83a15015a2a"} Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.616408 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="819a63cf33834f09384ea6d3cc1d1229b348ccabb1c99d2a3f50d83a15015a2a" Jan 30 10:00:04 crc kubenswrapper[4758]: I0130 10:00:04.616421 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-czxhc" Jan 30 10:00:05 crc kubenswrapper[4758]: I0130 10:00:05.034200 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84"] Jan 30 10:00:05 crc kubenswrapper[4758]: I0130 10:00:05.042709 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496075-xcd84"] Jan 30 10:00:05 crc kubenswrapper[4758]: I0130 10:00:05.781383 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c0d7e9-81d5-4f36-bc9e-56e22d853f85" path="/var/lib/kubelet/pods/31c0d7e9-81d5-4f36-bc9e-56e22d853f85/volumes" Jan 30 10:00:11 crc kubenswrapper[4758]: I0130 10:00:11.410902 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56_51f6834f-53ed-44f6-ba73-fc7275fcb395/util/0.log" Jan 30 10:00:11 crc kubenswrapper[4758]: I0130 10:00:11.668905 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56_51f6834f-53ed-44f6-ba73-fc7275fcb395/util/0.log" Jan 30 10:00:11 crc kubenswrapper[4758]: I0130 10:00:11.681309 4758 scope.go:117] "RemoveContainer" containerID="32d29d08b6aebab1b513110ea8db75577887cfa1b8e5f32a8aaa6014efa7ad2d" Jan 30 10:00:11 crc kubenswrapper[4758]: I0130 10:00:11.708235 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56_51f6834f-53ed-44f6-ba73-fc7275fcb395/pull/0.log" Jan 30 10:00:11 crc kubenswrapper[4758]: I0130 10:00:11.732688 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56_51f6834f-53ed-44f6-ba73-fc7275fcb395/pull/0.log" Jan 30 10:00:11 crc kubenswrapper[4758]: I0130 
10:00:11.934063 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56_51f6834f-53ed-44f6-ba73-fc7275fcb395/pull/0.log" Jan 30 10:00:11 crc kubenswrapper[4758]: I0130 10:00:11.959133 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56_51f6834f-53ed-44f6-ba73-fc7275fcb395/extract/0.log" Jan 30 10:00:12 crc kubenswrapper[4758]: I0130 10:00:12.008606 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_43c3659829473fcf2b488f52ac7fb3758bf03b7ba556f7829b06bad624p5h56_51f6834f-53ed-44f6-ba73-fc7275fcb395/util/0.log" Jan 30 10:00:12 crc kubenswrapper[4758]: I0130 10:00:12.279559 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5f9bbdc844-6cgsq_62b3bb0d-894a-4cb1-b644-d42f3cba98d7/manager/0.log" Jan 30 10:00:12 crc kubenswrapper[4758]: I0130 10:00:12.302688 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-566c8844c5-6nj4p_1c4d1258-0416-49d0-a3a5-6ece70dc0c46/manager/0.log" Jan 30 10:00:12 crc kubenswrapper[4758]: I0130 10:00:12.434987 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-8f4c5cb64-cp9km_dc189df6-25bc-4d6e-aa30-05ce0db12721/manager/0.log" Jan 30 10:00:12 crc kubenswrapper[4758]: I0130 10:00:12.570267 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784f59d4f4-sw42x_fe68673c-8979-46ee-a4aa-f95bcd7b4e8a/manager/0.log" Jan 30 10:00:12 crc kubenswrapper[4758]: I0130 10:00:12.673283 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-54985f5875-jct4c_9579fc9d-6eae-4249-ac43-35144ed58bed/manager/0.log" Jan 30 10:00:12 crc kubenswrapper[4758]: I0130 10:00:12.779429 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-vjdn9_b2a4d0cd-ddb6-43d6-8f3e-457f519fb8c2/manager/0.log" Jan 30 10:00:13 crc kubenswrapper[4758]: I0130 10:00:13.145801 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-5k6df_ef31968c-db2e-4083-a08f-19a8daf0ac2d/manager/0.log" Jan 30 10:00:13 crc kubenswrapper[4758]: I0130 10:00:13.151595 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6fd9bbb6f6-4jmpb_5c2a7d2b-62a1-468b-a3b3-fe77698a41a2/manager/0.log" Jan 30 10:00:13 crc kubenswrapper[4758]: I0130 10:00:13.348055 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-6c9d56f9bd-h89d9_b9ddc7b3-8eb5-44fe-8c3b-4dd3a03e43b7/manager/0.log" Jan 30 10:00:13 crc kubenswrapper[4758]: I0130 10:00:13.399463 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-74954f9f78-kxmjn_ac7c91ce-d4d9-4754-9828-a43140218228/manager/0.log" Jan 30 10:00:13 crc kubenswrapper[4758]: I0130 10:00:13.876207 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-c5d9k_0565da5c-02e0-409f-b801-e06c3e79ef47/manager/0.log" Jan 30 10:00:13 crc kubenswrapper[4758]: I0130 
10:00:13.990431 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6cfc4f6754-7s6n2_b6d614d3-1ced-4b27-bd91-8edd410e5fc5/manager/0.log" Jan 30 10:00:14 crc kubenswrapper[4758]: I0130 10:00:14.206268 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-67f5956bc9-hp2mv_0b006f91-5b27-4342-935b-c7a7f174c03b/manager/0.log" Jan 30 10:00:14 crc kubenswrapper[4758]: I0130 10:00:14.251704 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-694c6dcf95-kdnzv_a104527b-98dc-4120-91b5-6e7e9466b9a3/manager/0.log" Jan 30 10:00:14 crc kubenswrapper[4758]: I0130 10:00:14.416400 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4d6xzhv_8cb0c6cc-e254-4dae-b433-397504fba6dc/manager/0.log" Jan 30 10:00:14 crc kubenswrapper[4758]: I0130 10:00:14.600529 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-744b85dfd5-tlqzj_10f9b3c9-c691-403e-801f-420bc2701a95/operator/0.log" Jan 30 10:00:14 crc kubenswrapper[4758]: I0130 10:00:14.880311 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-zh8ld_43724cfc-11f7-4ded-9561-4bde1020015f/registry-server/0.log" Jan 30 10:00:15 crc kubenswrapper[4758]: I0130 10:00:15.209948 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-qlc8l_03b807f9-cd53-4189-b7df-09c5ea5fdf53/manager/0.log" Jan 30 10:00:15 crc kubenswrapper[4758]: I0130 10:00:15.419072 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-vzcpf_178c1ff9-1a2a-4c4a-8258-89c267a5d0aa/manager/0.log" Jan 30 10:00:15 crc kubenswrapper[4758]: I0130 10:00:15.610787 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-gt4n2_fbb26261-18aa-4ba0-940e-788200175600/operator/0.log" Jan 30 10:00:15 crc kubenswrapper[4758]: I0130 10:00:15.897006 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-59cb5bfcb7-xtndr_0c09af13-f67b-4306-9039-f02d5f9e2f53/manager/0.log" Jan 30 10:00:15 crc kubenswrapper[4758]: I0130 10:00:15.922207 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7d4f9d9c9b-t7dht_e4787454-8070-449e-a7d0-2ff179eaaff3/manager/0.log" Jan 30 10:00:16 crc kubenswrapper[4758]: I0130 10:00:16.225960 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-pmvv4_fe039ec9-aaec-4e17-8eac-c7719245ba4d/manager/0.log" Jan 30 10:00:16 crc kubenswrapper[4758]: I0130 10:00:16.239307 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cd99594-xhszj_b271df00-e9f2-4c58-94e7-22ea4b7d7eaf/manager/0.log" Jan 30 10:00:16 crc kubenswrapper[4758]: I0130 10:00:16.437079 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5bf648c946-ngzmc_73790ffa-61b1-489c-94c9-3934af94185f/manager/0.log" Jan 30 10:00:22 crc kubenswrapper[4758]: I0130 10:00:22.387834 4758 
patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:00:22 crc kubenswrapper[4758]: I0130 10:00:22.388403 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:00:22 crc kubenswrapper[4758]: I0130 10:00:22.388447 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 10:00:22 crc kubenswrapper[4758]: I0130 10:00:22.389160 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ec033c17193dd2eddbcd2a6076b6c2f50aead98129c8a6395cc816bd312b9c8"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:00:22 crc kubenswrapper[4758]: I0130 10:00:22.389222 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://5ec033c17193dd2eddbcd2a6076b6c2f50aead98129c8a6395cc816bd312b9c8" gracePeriod=600 Jan 30 10:00:22 crc kubenswrapper[4758]: I0130 10:00:22.787832 4758 generic.go:334] "Generic (PLEG): container finished" podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="5ec033c17193dd2eddbcd2a6076b6c2f50aead98129c8a6395cc816bd312b9c8" exitCode=0 Jan 30 10:00:22 crc kubenswrapper[4758]: I0130 10:00:22.787928 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"5ec033c17193dd2eddbcd2a6076b6c2f50aead98129c8a6395cc816bd312b9c8"} Jan 30 10:00:22 crc kubenswrapper[4758]: I0130 10:00:22.788107 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"} Jan 30 10:00:22 crc kubenswrapper[4758]: I0130 10:00:22.788126 4758 scope.go:117] "RemoveContainer" containerID="a109dd0e314341612aaed2cae6be49860e641c30389ecf55ef118eff2325fdf3" Jan 30 10:00:35 crc kubenswrapper[4758]: I0130 10:00:35.920223 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hd9qf_3934d9aa-0054-4ed3-a2e8-a57dc60dad77/control-plane-machine-set-operator/0.log" Jan 30 10:00:36 crc kubenswrapper[4758]: I0130 10:00:36.114881 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q592c_8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972/machine-api-operator/0.log" Jan 30 10:00:36 crc kubenswrapper[4758]: I0130 10:00:36.124482 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q592c_8cca440a-6ad3-4a98-9ca4-a7f1bd2f2972/kube-rbac-proxy/0.log" Jan 30 10:00:41 crc kubenswrapper[4758]: I0130 10:00:41.926187 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4xszq"] Jan 30 10:00:41 crc kubenswrapper[4758]: E0130 10:00:41.927275 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e6a092-7ac9-466c-9d70-f324f2908447" containerName="collect-profiles" Jan 30 10:00:41 crc kubenswrapper[4758]: I0130 10:00:41.927292 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e6a092-7ac9-466c-9d70-f324f2908447" containerName="collect-profiles" Jan 30 10:00:41 crc kubenswrapper[4758]: I0130 10:00:41.927541 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6e6a092-7ac9-466c-9d70-f324f2908447" containerName="collect-profiles" Jan 30 10:00:41 crc kubenswrapper[4758]: I0130 10:00:41.929282 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:41 crc kubenswrapper[4758]: I0130 10:00:41.950940 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xszq"] Jan 30 10:00:41 crc kubenswrapper[4758]: I0130 10:00:41.984331 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qkwm\" (UniqueName: \"kubernetes.io/projected/e37ad05e-bde5-49eb-81da-eb24781ce575-kube-api-access-5qkwm\") pod \"redhat-marketplace-4xszq\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:41 crc kubenswrapper[4758]: I0130 10:00:41.984384 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-utilities\") pod \"redhat-marketplace-4xszq\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:41 crc kubenswrapper[4758]: I0130 10:00:41.984542 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-catalog-content\") pod \"redhat-marketplace-4xszq\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.086197 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-catalog-content\") pod \"redhat-marketplace-4xszq\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.086308 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qkwm\" (UniqueName: \"kubernetes.io/projected/e37ad05e-bde5-49eb-81da-eb24781ce575-kube-api-access-5qkwm\") pod \"redhat-marketplace-4xszq\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.086336 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-utilities\") pod \"redhat-marketplace-4xszq\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.086842 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-utilities\") pod \"redhat-marketplace-4xszq\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.087123 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-catalog-content\") pod \"redhat-marketplace-4xszq\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.110707 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qkwm\" (UniqueName: \"kubernetes.io/projected/e37ad05e-bde5-49eb-81da-eb24781ce575-kube-api-access-5qkwm\") pod \"redhat-marketplace-4xszq\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.249948 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.755194 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xszq"] Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.927470 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jptgp"] Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.929298 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.944279 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jptgp"] Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.967639 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xszq" event={"ID":"e37ad05e-bde5-49eb-81da-eb24781ce575","Type":"ContainerStarted","Data":"074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a"} Jan 30 10:00:42 crc kubenswrapper[4758]: I0130 10:00:42.967697 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xszq" event={"ID":"e37ad05e-bde5-49eb-81da-eb24781ce575","Type":"ContainerStarted","Data":"fc255179fddadcb8aafebdb63864e466d1c15cf8f2e2eebf0300197b548eb2a8"} Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.008005 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-catalog-content\") pod \"redhat-operators-jptgp\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.008078 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-utilities\") pod \"redhat-operators-jptgp\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.008282 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wphk\" (UniqueName: \"kubernetes.io/projected/74ec10d5-be61-406f-949a-38c5457d385a-kube-api-access-4wphk\") pod \"redhat-operators-jptgp\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.109761 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-catalog-content\") pod \"redhat-operators-jptgp\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.109836 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-utilities\") pod \"redhat-operators-jptgp\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.110053 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wphk\" (UniqueName: \"kubernetes.io/projected/74ec10d5-be61-406f-949a-38c5457d385a-kube-api-access-4wphk\") pod \"redhat-operators-jptgp\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.110918 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-catalog-content\") pod 
\"redhat-operators-jptgp\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.111202 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-utilities\") pod \"redhat-operators-jptgp\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.137951 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wphk\" (UniqueName: \"kubernetes.io/projected/74ec10d5-be61-406f-949a-38c5457d385a-kube-api-access-4wphk\") pod \"redhat-operators-jptgp\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.251716 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.580322 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jptgp"] Jan 30 10:00:43 crc kubenswrapper[4758]: W0130 10:00:43.585155 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74ec10d5_be61_406f_949a_38c5457d385a.slice/crio-9e4c477e9aad1517b63bd45ac2d6106f123978a3ead5088e088726be68eb500e WatchSource:0}: Error finding container 9e4c477e9aad1517b63bd45ac2d6106f123978a3ead5088e088726be68eb500e: Status 404 returned error can't find the container with id 9e4c477e9aad1517b63bd45ac2d6106f123978a3ead5088e088726be68eb500e Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.976773 4758 generic.go:334] "Generic (PLEG): container finished" podID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerID="074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a" exitCode=0 Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.976839 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xszq" event={"ID":"e37ad05e-bde5-49eb-81da-eb24781ce575","Type":"ContainerDied","Data":"074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a"} Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.980168 4758 generic.go:334] "Generic (PLEG): container finished" podID="74ec10d5-be61-406f-949a-38c5457d385a" containerID="e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b" exitCode=0 Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.980202 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jptgp" event={"ID":"74ec10d5-be61-406f-949a-38c5457d385a","Type":"ContainerDied","Data":"e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b"} Jan 30 10:00:43 crc kubenswrapper[4758]: I0130 10:00:43.980226 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jptgp" event={"ID":"74ec10d5-be61-406f-949a-38c5457d385a","Type":"ContainerStarted","Data":"9e4c477e9aad1517b63bd45ac2d6106f123978a3ead5088e088726be68eb500e"} Jan 30 10:00:44 crc kubenswrapper[4758]: I0130 10:00:44.989346 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jptgp" 
event={"ID":"74ec10d5-be61-406f-949a-38c5457d385a","Type":"ContainerStarted","Data":"39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363"} Jan 30 10:00:44 crc kubenswrapper[4758]: I0130 10:00:44.990956 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xszq" event={"ID":"e37ad05e-bde5-49eb-81da-eb24781ce575","Type":"ContainerStarted","Data":"a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a"} Jan 30 10:00:46 crc kubenswrapper[4758]: I0130 10:00:46.001100 4758 generic.go:334] "Generic (PLEG): container finished" podID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerID="a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a" exitCode=0 Jan 30 10:00:46 crc kubenswrapper[4758]: I0130 10:00:46.001189 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xszq" event={"ID":"e37ad05e-bde5-49eb-81da-eb24781ce575","Type":"ContainerDied","Data":"a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a"} Jan 30 10:00:47 crc kubenswrapper[4758]: I0130 10:00:47.011847 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xszq" event={"ID":"e37ad05e-bde5-49eb-81da-eb24781ce575","Type":"ContainerStarted","Data":"ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3"} Jan 30 10:00:50 crc kubenswrapper[4758]: I0130 10:00:50.036372 4758 generic.go:334] "Generic (PLEG): container finished" podID="74ec10d5-be61-406f-949a-38c5457d385a" containerID="39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363" exitCode=0 Jan 30 10:00:50 crc kubenswrapper[4758]: I0130 10:00:50.036439 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jptgp" event={"ID":"74ec10d5-be61-406f-949a-38c5457d385a","Type":"ContainerDied","Data":"39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363"} Jan 30 10:00:50 crc kubenswrapper[4758]: I0130 10:00:50.070071 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4xszq" podStartSLOduration=6.387796912 podStartE2EDuration="9.070024241s" podCreationTimestamp="2026-01-30 10:00:41 +0000 UTC" firstStartedPulling="2026-01-30 10:00:43.979084861 +0000 UTC m=+5448.951396412" lastFinishedPulling="2026-01-30 10:00:46.66131219 +0000 UTC m=+5451.633623741" observedRunningTime="2026-01-30 10:00:47.038797443 +0000 UTC m=+5452.011109004" watchObservedRunningTime="2026-01-30 10:00:50.070024241 +0000 UTC m=+5455.042335792" Jan 30 10:00:50 crc kubenswrapper[4758]: I0130 10:00:50.927873 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-gzhzw_3ef1df8d-c3ae-4dc4-96e2-3fc73aade7bb/cert-manager-controller/0.log" Jan 30 10:00:51 crc kubenswrapper[4758]: I0130 10:00:51.047811 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jptgp" event={"ID":"74ec10d5-be61-406f-949a-38c5457d385a","Type":"ContainerStarted","Data":"f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2"} Jan 30 10:00:51 crc kubenswrapper[4758]: I0130 10:00:51.095614 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jptgp" podStartSLOduration=2.333008473 podStartE2EDuration="9.095590979s" podCreationTimestamp="2026-01-30 10:00:42 +0000 UTC" firstStartedPulling="2026-01-30 10:00:43.981867478 +0000 UTC m=+5448.954179029" 
lastFinishedPulling="2026-01-30 10:00:50.744449984 +0000 UTC m=+5455.716761535" observedRunningTime="2026-01-30 10:00:51.075383794 +0000 UTC m=+5456.047695405" watchObservedRunningTime="2026-01-30 10:00:51.095590979 +0000 UTC m=+5456.067902540" Jan 30 10:00:51 crc kubenswrapper[4758]: I0130 10:00:51.157939 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-fpg8k_738fc587-0a87-41b5-b2b0-690fa92d754e/cert-manager-cainjector/0.log" Jan 30 10:00:51 crc kubenswrapper[4758]: I0130 10:00:51.344396 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-xrm7r_b55a68e5-f198-4525-9d07-2acdcb906d36/cert-manager-webhook/0.log" Jan 30 10:00:52 crc kubenswrapper[4758]: I0130 10:00:52.251486 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:52 crc kubenswrapper[4758]: I0130 10:00:52.251754 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:52 crc kubenswrapper[4758]: I0130 10:00:52.307008 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:53 crc kubenswrapper[4758]: I0130 10:00:53.109633 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:53 crc kubenswrapper[4758]: I0130 10:00:53.253002 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:53 crc kubenswrapper[4758]: I0130 10:00:53.253053 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:00:53 crc kubenswrapper[4758]: I0130 10:00:53.720655 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xszq"] Jan 30 10:00:54 crc kubenswrapper[4758]: I0130 10:00:54.299535 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jptgp" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="registry-server" probeResult="failure" output=< Jan 30 10:00:54 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 10:00:54 crc kubenswrapper[4758]: > Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.080897 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4xszq" podUID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerName="registry-server" containerID="cri-o://ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3" gracePeriod=2 Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.564510 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.659741 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-catalog-content\") pod \"e37ad05e-bde5-49eb-81da-eb24781ce575\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.659892 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-utilities\") pod \"e37ad05e-bde5-49eb-81da-eb24781ce575\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.660024 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qkwm\" (UniqueName: \"kubernetes.io/projected/e37ad05e-bde5-49eb-81da-eb24781ce575-kube-api-access-5qkwm\") pod \"e37ad05e-bde5-49eb-81da-eb24781ce575\" (UID: \"e37ad05e-bde5-49eb-81da-eb24781ce575\") " Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.660430 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-utilities" (OuterVolumeSpecName: "utilities") pod "e37ad05e-bde5-49eb-81da-eb24781ce575" (UID: "e37ad05e-bde5-49eb-81da-eb24781ce575"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.660701 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.672314 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e37ad05e-bde5-49eb-81da-eb24781ce575-kube-api-access-5qkwm" (OuterVolumeSpecName: "kube-api-access-5qkwm") pod "e37ad05e-bde5-49eb-81da-eb24781ce575" (UID: "e37ad05e-bde5-49eb-81da-eb24781ce575"). InnerVolumeSpecName "kube-api-access-5qkwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.687527 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e37ad05e-bde5-49eb-81da-eb24781ce575" (UID: "e37ad05e-bde5-49eb-81da-eb24781ce575"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.764198 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e37ad05e-bde5-49eb-81da-eb24781ce575-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:00:55 crc kubenswrapper[4758]: I0130 10:00:55.764250 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qkwm\" (UniqueName: \"kubernetes.io/projected/e37ad05e-bde5-49eb-81da-eb24781ce575-kube-api-access-5qkwm\") on node \"crc\" DevicePath \"\"" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.089680 4758 generic.go:334] "Generic (PLEG): container finished" podID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerID="ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3" exitCode=0 Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.089734 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xszq" event={"ID":"e37ad05e-bde5-49eb-81da-eb24781ce575","Type":"ContainerDied","Data":"ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3"} Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.089755 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xszq" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.089770 4758 scope.go:117] "RemoveContainer" containerID="ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.089759 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xszq" event={"ID":"e37ad05e-bde5-49eb-81da-eb24781ce575","Type":"ContainerDied","Data":"fc255179fddadcb8aafebdb63864e466d1c15cf8f2e2eebf0300197b548eb2a8"} Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.114540 4758 scope.go:117] "RemoveContainer" containerID="a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.120288 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xszq"] Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.134784 4758 scope.go:117] "RemoveContainer" containerID="074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.147866 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xszq"] Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.177338 4758 scope.go:117] "RemoveContainer" containerID="ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3" Jan 30 10:00:56 crc kubenswrapper[4758]: E0130 10:00:56.177806 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3\": container with ID starting with ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3 not found: ID does not exist" containerID="ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.177914 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3"} err="failed to get container status 
\"ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3\": rpc error: code = NotFound desc = could not find container \"ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3\": container with ID starting with ad457bc6dc4b37072aea247dc07deeda4dc56019030d7a396914d62fa71a15b3 not found: ID does not exist" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.177988 4758 scope.go:117] "RemoveContainer" containerID="a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a" Jan 30 10:00:56 crc kubenswrapper[4758]: E0130 10:00:56.178477 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a\": container with ID starting with a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a not found: ID does not exist" containerID="a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.178587 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a"} err="failed to get container status \"a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a\": rpc error: code = NotFound desc = could not find container \"a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a\": container with ID starting with a62bf3c936af1ef1546d4d07b58ffd163b7a1a4dc4b531be67d1edc3a381101a not found: ID does not exist" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.178660 4758 scope.go:117] "RemoveContainer" containerID="074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a" Jan 30 10:00:56 crc kubenswrapper[4758]: E0130 10:00:56.178964 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a\": container with ID starting with 074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a not found: ID does not exist" containerID="074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a" Jan 30 10:00:56 crc kubenswrapper[4758]: I0130 10:00:56.178987 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a"} err="failed to get container status \"074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a\": rpc error: code = NotFound desc = could not find container \"074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a\": container with ID starting with 074e757911c02a3d84faf212f880bda64eb029445a8cbb90d2c5f87bb4d4ca6a not found: ID does not exist" Jan 30 10:00:57 crc kubenswrapper[4758]: I0130 10:00:57.781542 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e37ad05e-bde5-49eb-81da-eb24781ce575" path="/var/lib/kubelet/pods/e37ad05e-bde5-49eb-81da-eb24781ce575/volumes" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.154469 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496121-22z2n"] Jan 30 10:01:00 crc kubenswrapper[4758]: E0130 10:01:00.155192 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerName="extract-content" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.155210 4758 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerName="extract-content" Jan 30 10:01:00 crc kubenswrapper[4758]: E0130 10:01:00.155235 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerName="registry-server" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.155244 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerName="registry-server" Jan 30 10:01:00 crc kubenswrapper[4758]: E0130 10:01:00.155279 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerName="extract-utilities" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.155287 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerName="extract-utilities" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.155480 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37ad05e-bde5-49eb-81da-eb24781ce575" containerName="registry-server" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.156177 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.166637 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496121-22z2n"] Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.341027 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q5rc\" (UniqueName: \"kubernetes.io/projected/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-kube-api-access-8q5rc\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.341133 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-combined-ca-bundle\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.341168 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-config-data\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.341206 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-fernet-keys\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.443404 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-config-data\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.443842 4758 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-fernet-keys\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.443993 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q5rc\" (UniqueName: \"kubernetes.io/projected/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-kube-api-access-8q5rc\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.444173 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-combined-ca-bundle\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.451282 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-fernet-keys\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.453424 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-config-data\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.453764 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-combined-ca-bundle\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.464323 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q5rc\" (UniqueName: \"kubernetes.io/projected/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-kube-api-access-8q5rc\") pod \"keystone-cron-29496121-22z2n\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.473941 4758 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:00 crc kubenswrapper[4758]: I0130 10:01:00.949671 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496121-22z2n"] Jan 30 10:01:01 crc kubenswrapper[4758]: I0130 10:01:01.133068 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496121-22z2n" event={"ID":"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28","Type":"ContainerStarted","Data":"131f208ad9c0f5ca08e81a316be162ec993578076d588ef58a15412be61bdbbc"} Jan 30 10:01:02 crc kubenswrapper[4758]: I0130 10:01:02.144273 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496121-22z2n" event={"ID":"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28","Type":"ContainerStarted","Data":"8f7ba1e3736ab74314c8e0b4485ee4087f0f9c9c1db720ef32172059aed12689"} Jan 30 10:01:02 crc kubenswrapper[4758]: I0130 10:01:02.160961 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496121-22z2n" podStartSLOduration=2.160945838 podStartE2EDuration="2.160945838s" podCreationTimestamp="2026-01-30 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:01:02.157357785 +0000 UTC m=+5467.129669336" watchObservedRunningTime="2026-01-30 10:01:02.160945838 +0000 UTC m=+5467.133257389" Jan 30 10:01:04 crc kubenswrapper[4758]: I0130 10:01:04.298284 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jptgp" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="registry-server" probeResult="failure" output=< Jan 30 10:01:04 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 10:01:04 crc kubenswrapper[4758]: > Jan 30 10:01:05 crc kubenswrapper[4758]: I0130 10:01:05.185700 4758 generic.go:334] "Generic (PLEG): container finished" podID="25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28" containerID="8f7ba1e3736ab74314c8e0b4485ee4087f0f9c9c1db720ef32172059aed12689" exitCode=0 Jan 30 10:01:05 crc kubenswrapper[4758]: I0130 10:01:05.185793 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496121-22z2n" event={"ID":"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28","Type":"ContainerDied","Data":"8f7ba1e3736ab74314c8e0b4485ee4087f0f9c9c1db720ef32172059aed12689"} Jan 30 10:01:05 crc kubenswrapper[4758]: I0130 10:01:05.559162 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-5w4jj_edf1a46e-1ddb-45c6-b545-911d0f651ee9/nmstate-console-plugin/0.log" Jan 30 10:01:05 crc kubenswrapper[4758]: I0130 10:01:05.811984 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-jjswr_619a7108-d329-4b73-84eb-4258a2bfe118/nmstate-handler/0.log" Jan 30 10:01:05 crc kubenswrapper[4758]: I0130 10:01:05.814869 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-9d5tr_767b8ab5-0385-4ee5-a65c-20d58550812e/kube-rbac-proxy/0.log" Jan 30 10:01:05 crc kubenswrapper[4758]: I0130 10:01:05.886337 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-9d5tr_767b8ab5-0385-4ee5-a65c-20d58550812e/nmstate-metrics/0.log" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.112515 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-r9vhp_1d70f37f-b0fc-48e9-ba5d-50c0e6187fa1/nmstate-operator/0.log" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.159116 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-tp5p2_b3b445ff-f326-4f7a-9f01-557ea0ac488e/nmstate-webhook/0.log" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.598365 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.767840 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-fernet-keys\") pod \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.767915 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q5rc\" (UniqueName: \"kubernetes.io/projected/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-kube-api-access-8q5rc\") pod \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.768004 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-combined-ca-bundle\") pod \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.768109 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-config-data\") pod \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\" (UID: \"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28\") " Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.780278 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28" (UID: "25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.780545 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-kube-api-access-8q5rc" (OuterVolumeSpecName: "kube-api-access-8q5rc") pod "25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28" (UID: "25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28"). InnerVolumeSpecName "kube-api-access-8q5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.831610 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-config-data" (OuterVolumeSpecName: "config-data") pod "25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28" (UID: "25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.840186 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28" (UID: "25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.870544 4758 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.870578 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q5rc\" (UniqueName: \"kubernetes.io/projected/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-kube-api-access-8q5rc\") on node \"crc\" DevicePath \"\"" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.870588 4758 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:01:06 crc kubenswrapper[4758]: I0130 10:01:06.870597 4758 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:01:07 crc kubenswrapper[4758]: I0130 10:01:07.203959 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496121-22z2n" event={"ID":"25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28","Type":"ContainerDied","Data":"131f208ad9c0f5ca08e81a316be162ec993578076d588ef58a15412be61bdbbc"} Jan 30 10:01:07 crc kubenswrapper[4758]: I0130 10:01:07.204319 4758 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="131f208ad9c0f5ca08e81a316be162ec993578076d588ef58a15412be61bdbbc" Jan 30 10:01:07 crc kubenswrapper[4758]: I0130 10:01:07.203998 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496121-22z2n" Jan 30 10:01:14 crc kubenswrapper[4758]: I0130 10:01:14.300488 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jptgp" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="registry-server" probeResult="failure" output=< Jan 30 10:01:14 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 10:01:14 crc kubenswrapper[4758]: > Jan 30 10:01:24 crc kubenswrapper[4758]: I0130 10:01:24.296398 4758 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jptgp" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="registry-server" probeResult="failure" output=< Jan 30 10:01:24 crc kubenswrapper[4758]: timeout: failed to connect service ":50051" within 1s Jan 30 10:01:24 crc kubenswrapper[4758]: > Jan 30 10:01:33 crc kubenswrapper[4758]: I0130 10:01:33.296390 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:01:33 crc kubenswrapper[4758]: I0130 10:01:33.349477 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:01:33 crc kubenswrapper[4758]: I0130 10:01:33.529579 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jptgp"] Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.063513 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-28gk8_30993dee-7712-48e7-a156-86293a84ea40/kube-rbac-proxy/0.log" Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.094755 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-28gk8_30993dee-7712-48e7-a156-86293a84ea40/controller/0.log" Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.309162 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-frr-files/0.log" Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.514542 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jptgp" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="registry-server" containerID="cri-o://f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2" gracePeriod=2 Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.546024 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-reloader/0.log" Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.548355 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-metrics/0.log" Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.600387 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-frr-files/0.log" Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.641463 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-reloader/0.log" Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.825790 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-reloader/0.log" Jan 30 
10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.887135 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-metrics/0.log" Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.913925 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-frr-files/0.log" Jan 30 10:01:34 crc kubenswrapper[4758]: I0130 10:01:34.959977 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-metrics/0.log" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.112117 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.185222 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-frr-files/0.log" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.212181 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-reloader/0.log" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.240614 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/cp-metrics/0.log" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.290082 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-utilities\") pod \"74ec10d5-be61-406f-949a-38c5457d385a\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.290365 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-catalog-content\") pod \"74ec10d5-be61-406f-949a-38c5457d385a\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.290406 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wphk\" (UniqueName: \"kubernetes.io/projected/74ec10d5-be61-406f-949a-38c5457d385a-kube-api-access-4wphk\") pod \"74ec10d5-be61-406f-949a-38c5457d385a\" (UID: \"74ec10d5-be61-406f-949a-38c5457d385a\") " Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.290750 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-utilities" (OuterVolumeSpecName: "utilities") pod "74ec10d5-be61-406f-949a-38c5457d385a" (UID: "74ec10d5-be61-406f-949a-38c5457d385a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.304329 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74ec10d5-be61-406f-949a-38c5457d385a-kube-api-access-4wphk" (OuterVolumeSpecName: "kube-api-access-4wphk") pod "74ec10d5-be61-406f-949a-38c5457d385a" (UID: "74ec10d5-be61-406f-949a-38c5457d385a"). InnerVolumeSpecName "kube-api-access-4wphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.309995 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/controller/0.log" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.392344 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wphk\" (UniqueName: \"kubernetes.io/projected/74ec10d5-be61-406f-949a-38c5457d385a-kube-api-access-4wphk\") on node \"crc\" DevicePath \"\"" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.393118 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.398895 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74ec10d5-be61-406f-949a-38c5457d385a" (UID: "74ec10d5-be61-406f-949a-38c5457d385a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.457414 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/frr-metrics/0.log" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.494424 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74ec10d5-be61-406f-949a-38c5457d385a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.528194 4758 generic.go:334] "Generic (PLEG): container finished" podID="74ec10d5-be61-406f-949a-38c5457d385a" containerID="f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2" exitCode=0 Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.528297 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jptgp" event={"ID":"74ec10d5-be61-406f-949a-38c5457d385a","Type":"ContainerDied","Data":"f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2"} Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.528364 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jptgp" event={"ID":"74ec10d5-be61-406f-949a-38c5457d385a","Type":"ContainerDied","Data":"9e4c477e9aad1517b63bd45ac2d6106f123978a3ead5088e088726be68eb500e"} Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.528429 4758 scope.go:117] "RemoveContainer" containerID="f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.528679 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jptgp" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.560125 4758 scope.go:117] "RemoveContainer" containerID="39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.582093 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/kube-rbac-proxy/0.log" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.615666 4758 scope.go:117] "RemoveContainer" containerID="e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.627134 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jptgp"] Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.657083 4758 scope.go:117] "RemoveContainer" containerID="f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2" Jan 30 10:01:35 crc kubenswrapper[4758]: E0130 10:01:35.659944 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2\": container with ID starting with f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2 not found: ID does not exist" containerID="f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.659990 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2"} err="failed to get container status \"f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2\": rpc error: code = NotFound desc = could not find container \"f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2\": container with ID starting with f632cfc37c3ca5c20ddb630d23f8ca4c888cf20ad7176283fdd66bf8fabe03b2 not found: ID does not exist" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.660065 4758 scope.go:117] "RemoveContainer" containerID="39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363" Jan 30 10:01:35 crc kubenswrapper[4758]: E0130 10:01:35.664159 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363\": container with ID starting with 39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363 not found: ID does not exist" containerID="39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.664193 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363"} err="failed to get container status \"39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363\": rpc error: code = NotFound desc = could not find container \"39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363\": container with ID starting with 39d26c9a073a951245b4a3c79a1b3710c17b4ecf6cd2c131f45100b1b6c6c363 not found: ID does not exist" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.664216 4758 scope.go:117] "RemoveContainer" containerID="e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b" Jan 30 10:01:35 crc kubenswrapper[4758]: E0130 10:01:35.668240 4758 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b\": container with ID starting with e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b not found: ID does not exist" containerID="e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.668287 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b"} err="failed to get container status \"e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b\": rpc error: code = NotFound desc = could not find container \"e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b\": container with ID starting with e08803ed5d2003849a8a65282d4ae28ae7d625e760a65d394d4bebe67de99f7b not found: ID does not exist" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.673366 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/kube-rbac-proxy-frr/0.log" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.673670 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jptgp"] Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.780244 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/reloader/0.log" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.782303 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74ec10d5-be61-406f-949a-38c5457d385a" path="/var/lib/kubelet/pods/74ec10d5-be61-406f-949a-38c5457d385a/volumes" Jan 30 10:01:35 crc kubenswrapper[4758]: I0130 10:01:35.920698 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-kjhjn_0393e366-eeba-40c9-8020-9b16d0092dfd/frr-k8s-webhook-server/0.log" Jan 30 10:01:36 crc kubenswrapper[4758]: I0130 10:01:36.257455 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-75c8688755-rr2t4_29744c9b-d424-4ac9-b224-fe0956166373/manager/0.log" Jan 30 10:01:36 crc kubenswrapper[4758]: I0130 10:01:36.323442 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7ffc5c558b-h88wr_79b7fbe5-4cdd-43d1-8bb4-71936b19eeb5/webhook-server/0.log" Jan 30 10:01:36 crc kubenswrapper[4758]: I0130 10:01:36.585735 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-67xgc_10c38902-7117-4dc3-ad90-eb26dd9656de/kube-rbac-proxy/0.log" Jan 30 10:01:36 crc kubenswrapper[4758]: I0130 10:01:36.986438 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vfjq6_c8f7b5d4-29b4-4741-9bcf-a993dbbce575/frr/0.log" Jan 30 10:01:37 crc kubenswrapper[4758]: I0130 10:01:37.092557 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-67xgc_10c38902-7117-4dc3-ad90-eb26dd9656de/speaker/0.log" Jan 30 10:01:51 crc kubenswrapper[4758]: I0130 10:01:51.800427 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r_749e159a-10a8-4704-a263-3ec389807647/util/0.log" Jan 30 10:01:51 crc kubenswrapper[4758]: I0130 10:01:51.998201 4758 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r_749e159a-10a8-4704-a263-3ec389807647/util/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.038084 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r_749e159a-10a8-4704-a263-3ec389807647/pull/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.058835 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r_749e159a-10a8-4704-a263-3ec389807647/pull/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.233985 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r_749e159a-10a8-4704-a263-3ec389807647/util/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.272198 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r_749e159a-10a8-4704-a263-3ec389807647/pull/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.347474 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9gg7r_749e159a-10a8-4704-a263-3ec389807647/extract/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.459692 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j_e581bd2a-bbb4-476e-821e-f55ba597f41e/util/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.697810 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j_e581bd2a-bbb4-476e-821e-f55ba597f41e/util/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.736987 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j_e581bd2a-bbb4-476e-821e-f55ba597f41e/pull/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.758016 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j_e581bd2a-bbb4-476e-821e-f55ba597f41e/pull/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.904737 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j_e581bd2a-bbb4-476e-821e-f55ba597f41e/util/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.907679 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j_e581bd2a-bbb4-476e-821e-f55ba597f41e/pull/0.log" Jan 30 10:01:52 crc kubenswrapper[4758]: I0130 10:01:52.945251 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7137qt6j_e581bd2a-bbb4-476e-821e-f55ba597f41e/extract/0.log" Jan 30 10:01:53 crc kubenswrapper[4758]: I0130 10:01:53.088028 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-7tq2j_73f8c779-64cc-4d7d-8762-4f8cf1611071/extract-utilities/0.log" Jan 30 10:01:53 crc kubenswrapper[4758]: I0130 10:01:53.275298 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tq2j_73f8c779-64cc-4d7d-8762-4f8cf1611071/extract-utilities/0.log" Jan 30 10:01:53 crc kubenswrapper[4758]: I0130 10:01:53.341781 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tq2j_73f8c779-64cc-4d7d-8762-4f8cf1611071/extract-content/0.log" Jan 30 10:01:53 crc kubenswrapper[4758]: I0130 10:01:53.346341 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tq2j_73f8c779-64cc-4d7d-8762-4f8cf1611071/extract-content/0.log" Jan 30 10:01:53 crc kubenswrapper[4758]: I0130 10:01:53.507844 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tq2j_73f8c779-64cc-4d7d-8762-4f8cf1611071/extract-content/0.log" Jan 30 10:01:53 crc kubenswrapper[4758]: I0130 10:01:53.515254 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tq2j_73f8c779-64cc-4d7d-8762-4f8cf1611071/extract-utilities/0.log" Jan 30 10:01:53 crc kubenswrapper[4758]: I0130 10:01:53.817170 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6v7r_76812ff6-8f58-4c5c-8606-3cc8f949146e/extract-utilities/0.log" Jan 30 10:01:54 crc kubenswrapper[4758]: I0130 10:01:54.155820 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6v7r_76812ff6-8f58-4c5c-8606-3cc8f949146e/extract-content/0.log" Jan 30 10:01:54 crc kubenswrapper[4758]: I0130 10:01:54.162695 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7tq2j_73f8c779-64cc-4d7d-8762-4f8cf1611071/registry-server/0.log" Jan 30 10:01:54 crc kubenswrapper[4758]: I0130 10:01:54.181573 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6v7r_76812ff6-8f58-4c5c-8606-3cc8f949146e/extract-utilities/0.log" Jan 30 10:01:54 crc kubenswrapper[4758]: I0130 10:01:54.192919 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6v7r_76812ff6-8f58-4c5c-8606-3cc8f949146e/extract-content/0.log" Jan 30 10:01:54 crc kubenswrapper[4758]: I0130 10:01:54.344993 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6v7r_76812ff6-8f58-4c5c-8606-3cc8f949146e/extract-utilities/0.log" Jan 30 10:01:54 crc kubenswrapper[4758]: I0130 10:01:54.483721 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h6v7r_76812ff6-8f58-4c5c-8606-3cc8f949146e/extract-content/0.log" Jan 30 10:01:54 crc kubenswrapper[4758]: I0130 10:01:54.642341 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lsfmf_f34a2860-1860-4032-8f5d-9278338c1b19/marketplace-operator/0.log" Jan 30 10:01:54 crc kubenswrapper[4758]: I0130 10:01:54.932984 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kb7fp_db6d53d3-8c72-4f16-9bf1-f196d3c85e3a/extract-utilities/0.log" Jan 30 10:01:55 crc kubenswrapper[4758]: I0130 10:01:55.034658 4758 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-h6v7r_76812ff6-8f58-4c5c-8606-3cc8f949146e/registry-server/0.log" Jan 30 10:01:55 crc kubenswrapper[4758]: I0130 10:01:55.043638 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kb7fp_db6d53d3-8c72-4f16-9bf1-f196d3c85e3a/extract-content/0.log" Jan 30 10:01:55 crc kubenswrapper[4758]: I0130 10:01:55.043801 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kb7fp_db6d53d3-8c72-4f16-9bf1-f196d3c85e3a/extract-utilities/0.log" Jan 30 10:01:55 crc kubenswrapper[4758]: I0130 10:01:55.189281 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kb7fp_db6d53d3-8c72-4f16-9bf1-f196d3c85e3a/extract-content/0.log" Jan 30 10:01:55 crc kubenswrapper[4758]: I0130 10:01:55.442492 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kb7fp_db6d53d3-8c72-4f16-9bf1-f196d3c85e3a/extract-utilities/0.log" Jan 30 10:01:55 crc kubenswrapper[4758]: I0130 10:01:55.448863 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kb7fp_db6d53d3-8c72-4f16-9bf1-f196d3c85e3a/extract-content/0.log" Jan 30 10:01:55 crc kubenswrapper[4758]: I0130 10:01:55.667859 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kb7fp_db6d53d3-8c72-4f16-9bf1-f196d3c85e3a/registry-server/0.log" Jan 30 10:01:55 crc kubenswrapper[4758]: I0130 10:01:55.777303 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fmg9d_8404e227-68e2-4686-a04d-00048ba303ec/extract-utilities/0.log" Jan 30 10:01:56 crc kubenswrapper[4758]: I0130 10:01:56.006508 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fmg9d_8404e227-68e2-4686-a04d-00048ba303ec/extract-utilities/0.log" Jan 30 10:01:56 crc kubenswrapper[4758]: I0130 10:01:56.011580 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fmg9d_8404e227-68e2-4686-a04d-00048ba303ec/extract-content/0.log" Jan 30 10:01:56 crc kubenswrapper[4758]: I0130 10:01:56.031417 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fmg9d_8404e227-68e2-4686-a04d-00048ba303ec/extract-content/0.log" Jan 30 10:01:56 crc kubenswrapper[4758]: I0130 10:01:56.264902 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fmg9d_8404e227-68e2-4686-a04d-00048ba303ec/extract-utilities/0.log" Jan 30 10:01:56 crc kubenswrapper[4758]: I0130 10:01:56.306896 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fmg9d_8404e227-68e2-4686-a04d-00048ba303ec/extract-content/0.log" Jan 30 10:01:56 crc kubenswrapper[4758]: I0130 10:01:56.960403 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fmg9d_8404e227-68e2-4686-a04d-00048ba303ec/registry-server/0.log" Jan 30 10:02:22 crc kubenswrapper[4758]: I0130 10:02:22.387702 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:02:22 crc 
kubenswrapper[4758]: I0130 10:02:22.388245 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:02:52 crc kubenswrapper[4758]: I0130 10:02:52.387197 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:02:52 crc kubenswrapper[4758]: I0130 10:02:52.387873 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:03:22 crc kubenswrapper[4758]: I0130 10:03:22.387702 4758 patch_prober.go:28] interesting pod/machine-config-daemon-2nkwx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:03:22 crc kubenswrapper[4758]: I0130 10:03:22.389500 4758 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:03:22 crc kubenswrapper[4758]: I0130 10:03:22.389573 4758 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" Jan 30 10:03:22 crc kubenswrapper[4758]: I0130 10:03:22.390309 4758 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"} pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:03:22 crc kubenswrapper[4758]: I0130 10:03:22.390366 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerName="machine-config-daemon" containerID="cri-o://74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" gracePeriod=600 Jan 30 10:03:22 crc kubenswrapper[4758]: E0130 10:03:22.521260 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:03:23 crc kubenswrapper[4758]: I0130 10:03:23.450854 4758 generic.go:334] "Generic (PLEG): container finished" 
podID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" exitCode=0 Jan 30 10:03:23 crc kubenswrapper[4758]: I0130 10:03:23.450908 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerDied","Data":"74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"} Jan 30 10:03:23 crc kubenswrapper[4758]: I0130 10:03:23.451238 4758 scope.go:117] "RemoveContainer" containerID="5ec033c17193dd2eddbcd2a6076b6c2f50aead98129c8a6395cc816bd312b9c8" Jan 30 10:03:23 crc kubenswrapper[4758]: I0130 10:03:23.451825 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:03:23 crc kubenswrapper[4758]: E0130 10:03:23.452106 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:03:35 crc kubenswrapper[4758]: I0130 10:03:35.774757 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:03:35 crc kubenswrapper[4758]: E0130 10:03:35.775510 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.826210 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vgrtm"] Jan 30 10:03:41 crc kubenswrapper[4758]: E0130 10:03:41.827104 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28" containerName="keystone-cron" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.827118 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28" containerName="keystone-cron" Jan 30 10:03:41 crc kubenswrapper[4758]: E0130 10:03:41.827141 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="registry-server" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.827149 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="registry-server" Jan 30 10:03:41 crc kubenswrapper[4758]: E0130 10:03:41.827162 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="extract-content" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.827169 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="extract-content" Jan 30 10:03:41 crc kubenswrapper[4758]: E0130 10:03:41.827192 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74ec10d5-be61-406f-949a-38c5457d385a" 
containerName="extract-utilities" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.827199 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="extract-utilities" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.827408 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="74ec10d5-be61-406f-949a-38c5457d385a" containerName="registry-server" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.827435 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="25ce6ab6-0a95-4ee0-aa17-7eaa7cd78b28" containerName="keystone-cron" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.828734 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.836031 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vgrtm"] Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.889452 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-utilities\") pod \"certified-operators-vgrtm\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.889522 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-catalog-content\") pod \"certified-operators-vgrtm\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.889545 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5dks\" (UniqueName: \"kubernetes.io/projected/7ff04746-69db-463d-8b3e-23b69fa6a995-kube-api-access-t5dks\") pod \"certified-operators-vgrtm\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.991555 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-utilities\") pod \"certified-operators-vgrtm\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.992009 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-catalog-content\") pod \"certified-operators-vgrtm\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.992076 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5dks\" (UniqueName: \"kubernetes.io/projected/7ff04746-69db-463d-8b3e-23b69fa6a995-kube-api-access-t5dks\") pod \"certified-operators-vgrtm\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.992328 4758 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-catalog-content\") pod \"certified-operators-vgrtm\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:41 crc kubenswrapper[4758]: I0130 10:03:41.992444 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-utilities\") pod \"certified-operators-vgrtm\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:42 crc kubenswrapper[4758]: I0130 10:03:42.015238 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5dks\" (UniqueName: \"kubernetes.io/projected/7ff04746-69db-463d-8b3e-23b69fa6a995-kube-api-access-t5dks\") pod \"certified-operators-vgrtm\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:42 crc kubenswrapper[4758]: I0130 10:03:42.147356 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:42 crc kubenswrapper[4758]: W0130 10:03:42.831685 4758 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice/crio-b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d WatchSource:0}: Error finding container b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d: Status 404 returned error can't find the container with id b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d Jan 30 10:03:42 crc kubenswrapper[4758]: I0130 10:03:42.869206 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vgrtm"] Jan 30 10:03:43 crc kubenswrapper[4758]: I0130 10:03:43.638189 4758 generic.go:334] "Generic (PLEG): container finished" podID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerID="fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859" exitCode=0 Jan 30 10:03:43 crc kubenswrapper[4758]: I0130 10:03:43.638361 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgrtm" event={"ID":"7ff04746-69db-463d-8b3e-23b69fa6a995","Type":"ContainerDied","Data":"fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859"} Jan 30 10:03:43 crc kubenswrapper[4758]: I0130 10:03:43.638486 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgrtm" event={"ID":"7ff04746-69db-463d-8b3e-23b69fa6a995","Type":"ContainerStarted","Data":"b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d"} Jan 30 10:03:43 crc kubenswrapper[4758]: I0130 10:03:43.641540 4758 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 10:03:44 crc kubenswrapper[4758]: I0130 10:03:44.647650 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgrtm" event={"ID":"7ff04746-69db-463d-8b3e-23b69fa6a995","Type":"ContainerStarted","Data":"8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21"} Jan 30 10:03:45 crc kubenswrapper[4758]: I0130 10:03:45.657865 4758 generic.go:334] "Generic (PLEG): container finished" podID="7ff04746-69db-463d-8b3e-23b69fa6a995" 
containerID="8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21" exitCode=0 Jan 30 10:03:45 crc kubenswrapper[4758]: I0130 10:03:45.658098 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgrtm" event={"ID":"7ff04746-69db-463d-8b3e-23b69fa6a995","Type":"ContainerDied","Data":"8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21"} Jan 30 10:03:46 crc kubenswrapper[4758]: I0130 10:03:46.669211 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgrtm" event={"ID":"7ff04746-69db-463d-8b3e-23b69fa6a995","Type":"ContainerStarted","Data":"a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112"} Jan 30 10:03:46 crc kubenswrapper[4758]: I0130 10:03:46.694535 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vgrtm" podStartSLOduration=3.193368818 podStartE2EDuration="5.694517899s" podCreationTimestamp="2026-01-30 10:03:41 +0000 UTC" firstStartedPulling="2026-01-30 10:03:43.641332358 +0000 UTC m=+5628.613643909" lastFinishedPulling="2026-01-30 10:03:46.142481439 +0000 UTC m=+5631.114792990" observedRunningTime="2026-01-30 10:03:46.689007996 +0000 UTC m=+5631.661319557" watchObservedRunningTime="2026-01-30 10:03:46.694517899 +0000 UTC m=+5631.666829450" Jan 30 10:03:50 crc kubenswrapper[4758]: I0130 10:03:50.768378 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:03:50 crc kubenswrapper[4758]: E0130 10:03:50.768875 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:03:52 crc kubenswrapper[4758]: I0130 10:03:52.149335 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:52 crc kubenswrapper[4758]: I0130 10:03:52.149665 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:52 crc kubenswrapper[4758]: I0130 10:03:52.193841 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:52 crc kubenswrapper[4758]: I0130 10:03:52.774320 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:52 crc kubenswrapper[4758]: I0130 10:03:52.817347 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vgrtm"] Jan 30 10:03:54 crc kubenswrapper[4758]: I0130 10:03:54.750053 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vgrtm" podUID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerName="registry-server" containerID="cri-o://a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112" gracePeriod=2 Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.204859 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.260412 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-utilities\") pod \"7ff04746-69db-463d-8b3e-23b69fa6a995\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.260765 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-catalog-content\") pod \"7ff04746-69db-463d-8b3e-23b69fa6a995\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.261264 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-utilities" (OuterVolumeSpecName: "utilities") pod "7ff04746-69db-463d-8b3e-23b69fa6a995" (UID: "7ff04746-69db-463d-8b3e-23b69fa6a995"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.282118 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5dks\" (UniqueName: \"kubernetes.io/projected/7ff04746-69db-463d-8b3e-23b69fa6a995-kube-api-access-t5dks\") pod \"7ff04746-69db-463d-8b3e-23b69fa6a995\" (UID: \"7ff04746-69db-463d-8b3e-23b69fa6a995\") " Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.288382 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.306314 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff04746-69db-463d-8b3e-23b69fa6a995-kube-api-access-t5dks" (OuterVolumeSpecName: "kube-api-access-t5dks") pod "7ff04746-69db-463d-8b3e-23b69fa6a995" (UID: "7ff04746-69db-463d-8b3e-23b69fa6a995"). InnerVolumeSpecName "kube-api-access-t5dks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.390382 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5dks\" (UniqueName: \"kubernetes.io/projected/7ff04746-69db-463d-8b3e-23b69fa6a995-kube-api-access-t5dks\") on node \"crc\" DevicePath \"\"" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.435988 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ff04746-69db-463d-8b3e-23b69fa6a995" (UID: "7ff04746-69db-463d-8b3e-23b69fa6a995"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.492400 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff04746-69db-463d-8b3e-23b69fa6a995-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.760938 4758 generic.go:334] "Generic (PLEG): container finished" podID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerID="a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112" exitCode=0 Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.760989 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgrtm" event={"ID":"7ff04746-69db-463d-8b3e-23b69fa6a995","Type":"ContainerDied","Data":"a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112"} Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.761013 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgrtm" event={"ID":"7ff04746-69db-463d-8b3e-23b69fa6a995","Type":"ContainerDied","Data":"b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d"} Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.761030 4758 scope.go:117] "RemoveContainer" containerID="a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.761171 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vgrtm" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.796322 4758 scope.go:117] "RemoveContainer" containerID="8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.851199 4758 scope.go:117] "RemoveContainer" containerID="fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.908388 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vgrtm"] Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.921338 4758 scope.go:117] "RemoveContainer" containerID="a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112" Jan 30 10:03:55 crc kubenswrapper[4758]: E0130 10:03:55.923552 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112\": container with ID starting with a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112 not found: ID does not exist" containerID="a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.923603 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112"} err="failed to get container status \"a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112\": rpc error: code = NotFound desc = could not find container \"a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112\": container with ID starting with a6902b4890a449c6f40ae3ab4d5d1fa15e6c9f557e09fddaac7b0c2111edb112 not found: ID does not exist" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.923649 4758 scope.go:117] "RemoveContainer" containerID="8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21" Jan 
30 10:03:55 crc kubenswrapper[4758]: E0130 10:03:55.924835 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21\": container with ID starting with 8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21 not found: ID does not exist" containerID="8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.924864 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21"} err="failed to get container status \"8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21\": rpc error: code = NotFound desc = could not find container \"8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21\": container with ID starting with 8593a74cec79601c21ed4a60afb2c672268980758f7340e095c834921aea5b21 not found: ID does not exist" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.924882 4758 scope.go:117] "RemoveContainer" containerID="fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859" Jan 30 10:03:55 crc kubenswrapper[4758]: E0130 10:03:55.925467 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859\": container with ID starting with fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859 not found: ID does not exist" containerID="fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.925489 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859"} err="failed to get container status \"fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859\": rpc error: code = NotFound desc = could not find container \"fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859\": container with ID starting with fd788d21bc36e98a61ad0355efcd67e17306b0d067102287cdabf73a8a512859 not found: ID does not exist" Jan 30 10:03:55 crc kubenswrapper[4758]: I0130 10:03:55.926817 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vgrtm"] Jan 30 10:03:57 crc kubenswrapper[4758]: I0130 10:03:57.781793 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff04746-69db-463d-8b3e-23b69fa6a995" path="/var/lib/kubelet/pods/7ff04746-69db-463d-8b3e-23b69fa6a995/volumes" Jan 30 10:04:01 crc kubenswrapper[4758]: E0130 10:04:01.365120 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice/crio-b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice\": RecentStats: unable to find data in memory cache]" Jan 30 10:04:01 crc kubenswrapper[4758]: I0130 10:04:01.769605 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:04:01 crc kubenswrapper[4758]: E0130 10:04:01.769930 4758 pod_workers.go:1301] 
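[Editor's note] The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" lines above are benign: by the time the second RemoveContainer pass ran, CRI-O had already removed the container, and a gRPC NotFound during cleanup simply means the work is already done. A sketch of that tolerance, assuming a standard gRPC status error; isAlreadyGone and the surrounding flow are illustrative, not kubelet code:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isAlreadyGone reports whether a CRI call failed only because the
// container no longer exists, which a cleanup path can treat as success.
func isAlreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	// Simulate the runtime response seen in the log entries above.
	err := status.Error(codes.NotFound, `could not find container "a6902b..."`)
	if isAlreadyGone(err) {
		fmt.Println("container already removed; treating delete as a no-op")
	} else if err != nil {
		fmt.Println("real failure:", err)
	}
}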
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:04:11 crc kubenswrapper[4758]: E0130 10:04:11.613334 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice/crio-b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice\": RecentStats: unable to find data in memory cache]" Jan 30 10:04:11 crc kubenswrapper[4758]: I0130 10:04:11.838131 4758 scope.go:117] "RemoveContainer" containerID="5ec49f03ce30f39c620b596bdc2dedc28bf6a7cfab662959f3b4adf09c9a537d" Jan 30 10:04:15 crc kubenswrapper[4758]: I0130 10:04:15.774978 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:04:15 crc kubenswrapper[4758]: E0130 10:04:15.777369 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:04:21 crc kubenswrapper[4758]: E0130 10:04:21.863535 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice/crio-b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d\": RecentStats: unable to find data in memory cache]" Jan 30 10:04:27 crc kubenswrapper[4758]: I0130 10:04:27.771546 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:04:27 crc kubenswrapper[4758]: E0130 10:04:27.772299 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:04:32 crc kubenswrapper[4758]: E0130 10:04:32.115748 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice/crio-b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice\": RecentStats: unable to find data in memory cache]" Jan 30 10:04:40 crc kubenswrapper[4758]: I0130 10:04:40.769196 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:04:40 crc kubenswrapper[4758]: E0130 10:04:40.769958 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:04:42 crc kubenswrapper[4758]: E0130 10:04:42.371448 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice/crio-b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice\": RecentStats: unable to find data in memory cache]" Jan 30 10:04:52 crc kubenswrapper[4758]: E0130 10:04:52.604510 4758 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff04746_69db_463d_8b3e_23b69fa6a995.slice/crio-b725b9032b6c5b3d1fe0c3246f5100a2c6f05378d6d3320b5c714621b0b4bd0d\": RecentStats: unable to find data in memory cache]" Jan 30 10:04:52 crc kubenswrapper[4758]: I0130 10:04:52.769092 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:04:52 crc kubenswrapper[4758]: E0130 10:04:52.769351 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:05:03 crc kubenswrapper[4758]: I0130 10:05:03.768296 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:05:03 crc kubenswrapper[4758]: E0130 10:05:03.769014 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:05:11 crc kubenswrapper[4758]: I0130 10:05:11.904136 4758 scope.go:117] "RemoveContainer" containerID="d5124596fab9753fd484ec6efd38bf02b9c0c9ce543d442c2f18b786111542c3" Jan 30 10:05:13 crc kubenswrapper[4758]: I0130 
Jan 30 10:05:13 crc kubenswrapper[4758]: I0130 10:05:13.482928 4758 generic.go:334] "Generic (PLEG): container finished" podID="bec5515f-517d-441a-8d27-381128c9cbe3" containerID="a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee" exitCode=0
Jan 30 10:05:13 crc kubenswrapper[4758]: I0130 10:05:13.483429 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lpqrm/must-gather-qpckg" event={"ID":"bec5515f-517d-441a-8d27-381128c9cbe3","Type":"ContainerDied","Data":"a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee"}
Jan 30 10:05:13 crc kubenswrapper[4758]: I0130 10:05:13.484096 4758 scope.go:117] "RemoveContainer" containerID="a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee"
Jan 30 10:05:13 crc kubenswrapper[4758]: I0130 10:05:13.888221 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lpqrm_must-gather-qpckg_bec5515f-517d-441a-8d27-381128c9cbe3/gather/0.log"
Jan 30 10:05:16 crc kubenswrapper[4758]: I0130 10:05:16.768779 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:05:16 crc kubenswrapper[4758]: E0130 10:05:16.769392 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:05:21 crc kubenswrapper[4758]: I0130 10:05:21.971123 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lpqrm/must-gather-qpckg"]
Jan 30 10:05:21 crc kubenswrapper[4758]: I0130 10:05:21.972006 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-lpqrm/must-gather-qpckg" podUID="bec5515f-517d-441a-8d27-381128c9cbe3" containerName="copy" containerID="cri-o://5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec" gracePeriod=2
Jan 30 10:05:21 crc kubenswrapper[4758]: I0130 10:05:21.989227 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lpqrm/must-gather-qpckg"]
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.520111 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lpqrm_must-gather-qpckg_bec5515f-517d-441a-8d27-381128c9cbe3/copy/0.log"
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.521109 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/must-gather-qpckg"
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.578913 4758 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lpqrm_must-gather-qpckg_bec5515f-517d-441a-8d27-381128c9cbe3/copy/0.log"
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.579697 4758 generic.go:334] "Generic (PLEG): container finished" podID="bec5515f-517d-441a-8d27-381128c9cbe3" containerID="5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec" exitCode=143
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.579742 4758 scope.go:117] "RemoveContainer" containerID="5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec"
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.579855 4758 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lpqrm/must-gather-qpckg"
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.609575 4758 scope.go:117] "RemoveContainer" containerID="a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee"
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.630092 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bec5515f-517d-441a-8d27-381128c9cbe3-must-gather-output\") pod \"bec5515f-517d-441a-8d27-381128c9cbe3\" (UID: \"bec5515f-517d-441a-8d27-381128c9cbe3\") "
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.630183 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7d4q\" (UniqueName: \"kubernetes.io/projected/bec5515f-517d-441a-8d27-381128c9cbe3-kube-api-access-b7d4q\") pod \"bec5515f-517d-441a-8d27-381128c9cbe3\" (UID: \"bec5515f-517d-441a-8d27-381128c9cbe3\") "
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.637481 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec5515f-517d-441a-8d27-381128c9cbe3-kube-api-access-b7d4q" (OuterVolumeSpecName: "kube-api-access-b7d4q") pod "bec5515f-517d-441a-8d27-381128c9cbe3" (UID: "bec5515f-517d-441a-8d27-381128c9cbe3"). InnerVolumeSpecName "kube-api-access-b7d4q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.680722 4758 scope.go:117] "RemoveContainer" containerID="5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec"
Jan 30 10:05:22 crc kubenswrapper[4758]: E0130 10:05:22.681219 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec\": container with ID starting with 5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec not found: ID does not exist" containerID="5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec"
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.681248 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec"} err="failed to get container status \"5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec\": rpc error: code = NotFound desc = could not find container \"5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec\": container with ID starting with 5209b147165a07dff14659dc511d63e5d3ace8dd49f71e7a4665f490bfc854ec not found: ID does not exist"
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.681266 4758 scope.go:117] "RemoveContainer" containerID="a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee"
Jan 30 10:05:22 crc kubenswrapper[4758]: E0130 10:05:22.684081 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee\": container with ID starting with a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee not found: ID does not exist" containerID="a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee"
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.684114 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee"} err="failed to get container status \"a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee\": rpc error: code = NotFound desc = could not find container \"a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee\": container with ID starting with a2c1c0bc4cac9380e61d531d0e78407e1892b197585319f8b8f0eb982e0601ee not found: ID does not exist"
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.732698 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7d4q\" (UniqueName: \"kubernetes.io/projected/bec5515f-517d-441a-8d27-381128c9cbe3-kube-api-access-b7d4q\") on node \"crc\" DevicePath \"\""
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.842358 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bec5515f-517d-441a-8d27-381128c9cbe3-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bec5515f-517d-441a-8d27-381128c9cbe3" (UID: "bec5515f-517d-441a-8d27-381128c9cbe3"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:05:22 crc kubenswrapper[4758]: I0130 10:05:22.938750 4758 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bec5515f-517d-441a-8d27-381128c9cbe3-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 30 10:05:23 crc kubenswrapper[4758]: I0130 10:05:23.785427 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec5515f-517d-441a-8d27-381128c9cbe3" path="/var/lib/kubelet/pods/bec5515f-517d-441a-8d27-381128c9cbe3/volumes"
Jan 30 10:05:28 crc kubenswrapper[4758]: I0130 10:05:28.768605 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:05:28 crc kubenswrapper[4758]: E0130 10:05:28.769350 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:05:42 crc kubenswrapper[4758]: I0130 10:05:42.769525 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:05:42 crc kubenswrapper[4758]: E0130 10:05:42.770942 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:05:57 crc kubenswrapper[4758]: I0130 10:05:57.768830 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:05:57 crc kubenswrapper[4758]: E0130 10:05:57.769770 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:06:11 crc kubenswrapper[4758]: I0130 10:06:11.768950 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:06:11 crc kubenswrapper[4758]: E0130 10:06:11.769882 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:06:25 crc kubenswrapper[4758]: I0130 10:06:25.774358 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:06:25 crc kubenswrapper[4758]: E0130 10:06:25.775199 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:06:36 crc kubenswrapper[4758]: I0130 10:06:36.769063 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916"
Jan 30 10:06:36 crc kubenswrapper[4758]: E0130 10:06:36.770848 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.867605 4758 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ljdcd"]
Jan 30 10:06:47 crc kubenswrapper[4758]: E0130 10:06:47.868993 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerName="registry-server"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.869010 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerName="registry-server"
Jan 30 10:06:47 crc kubenswrapper[4758]: E0130 10:06:47.869051 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec5515f-517d-441a-8d27-381128c9cbe3" containerName="copy"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.869060 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec5515f-517d-441a-8d27-381128c9cbe3" containerName="copy"
Jan 30 10:06:47 crc kubenswrapper[4758]: E0130 10:06:47.869075 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerName="extract-utilities"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.869085 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerName="extract-utilities"
Jan 30 10:06:47 crc kubenswrapper[4758]: E0130 10:06:47.869096 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec5515f-517d-441a-8d27-381128c9cbe3" containerName="gather"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.869104 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec5515f-517d-441a-8d27-381128c9cbe3" containerName="gather"
Jan 30 10:06:47 crc kubenswrapper[4758]: E0130 10:06:47.869127 4758 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerName="extract-content"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.869134 4758 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerName="extract-content"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.896018 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec5515f-517d-441a-8d27-381128c9cbe3" containerName="gather"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.896131 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec5515f-517d-441a-8d27-381128c9cbe3" containerName="copy"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.896197 4758 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff04746-69db-463d-8b3e-23b69fa6a995" containerName="registry-server"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.898553 4758 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.905790 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljdcd"]
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.952225 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqs8m\" (UniqueName: \"kubernetes.io/projected/b5682a35-e6fb-4ecf-883c-796b4d1988c0-kube-api-access-pqs8m\") pod \"community-operators-ljdcd\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.952275 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-utilities\") pod \"community-operators-ljdcd\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:47 crc kubenswrapper[4758]: I0130 10:06:47.952307 4758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-catalog-content\") pod \"community-operators-ljdcd\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:48 crc kubenswrapper[4758]: I0130 10:06:48.054620 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqs8m\" (UniqueName: \"kubernetes.io/projected/b5682a35-e6fb-4ecf-883c-796b4d1988c0-kube-api-access-pqs8m\") pod \"community-operators-ljdcd\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:48 crc kubenswrapper[4758]: I0130 10:06:48.054682 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-utilities\") pod \"community-operators-ljdcd\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:48 crc kubenswrapper[4758]: I0130 10:06:48.054722 4758 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-catalog-content\") pod \"community-operators-ljdcd\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:48 crc kubenswrapper[4758]: I0130 10:06:48.055321 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-utilities\") pod \"community-operators-ljdcd\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:48 crc kubenswrapper[4758]: I0130 10:06:48.055448 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-catalog-content\") pod \"community-operators-ljdcd\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " pod="openshift-marketplace/community-operators-ljdcd"
Jan 30 10:06:48 crc kubenswrapper[4758]: I0130 10:06:48.078773 4758 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqs8m\" (UniqueName: \"kubernetes.io/projected/b5682a35-e6fb-4ecf-883c-796b4d1988c0-kube-api-access-pqs8m\") pod \"community-operators-ljdcd\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " pod="openshift-marketplace/community-operators-ljdcd"
Need to start a new one" pod="openshift-marketplace/community-operators-ljdcd" Jan 30 10:06:48 crc kubenswrapper[4758]: I0130 10:06:48.587517 4758 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljdcd"] Jan 30 10:06:49 crc kubenswrapper[4758]: I0130 10:06:49.423662 4758 generic.go:334] "Generic (PLEG): container finished" podID="b5682a35-e6fb-4ecf-883c-796b4d1988c0" containerID="b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3" exitCode=0 Jan 30 10:06:49 crc kubenswrapper[4758]: I0130 10:06:49.423876 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljdcd" event={"ID":"b5682a35-e6fb-4ecf-883c-796b4d1988c0","Type":"ContainerDied","Data":"b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3"} Jan 30 10:06:49 crc kubenswrapper[4758]: I0130 10:06:49.423913 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljdcd" event={"ID":"b5682a35-e6fb-4ecf-883c-796b4d1988c0","Type":"ContainerStarted","Data":"4d5d1d421b45d8382306b2113710c6bccea1f4dd16f411ccd63e1d217fcd85c4"} Jan 30 10:06:50 crc kubenswrapper[4758]: I0130 10:06:50.768129 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:06:50 crc kubenswrapper[4758]: E0130 10:06:50.768364 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:06:51 crc kubenswrapper[4758]: I0130 10:06:51.443595 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljdcd" event={"ID":"b5682a35-e6fb-4ecf-883c-796b4d1988c0","Type":"ContainerStarted","Data":"22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820"} Jan 30 10:06:53 crc kubenswrapper[4758]: I0130 10:06:53.463847 4758 generic.go:334] "Generic (PLEG): container finished" podID="b5682a35-e6fb-4ecf-883c-796b4d1988c0" containerID="22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820" exitCode=0 Jan 30 10:06:53 crc kubenswrapper[4758]: I0130 10:06:53.463915 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljdcd" event={"ID":"b5682a35-e6fb-4ecf-883c-796b4d1988c0","Type":"ContainerDied","Data":"22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820"} Jan 30 10:06:54 crc kubenswrapper[4758]: I0130 10:06:54.475961 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljdcd" event={"ID":"b5682a35-e6fb-4ecf-883c-796b4d1988c0","Type":"ContainerStarted","Data":"9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4"} Jan 30 10:06:54 crc kubenswrapper[4758]: I0130 10:06:54.502741 4758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ljdcd" podStartSLOduration=3.053154988 podStartE2EDuration="7.502724725s" podCreationTimestamp="2026-01-30 10:06:47 +0000 UTC" firstStartedPulling="2026-01-30 10:06:49.425628595 +0000 UTC m=+5814.397940146" lastFinishedPulling="2026-01-30 10:06:53.875198332 +0000 UTC m=+5818.847509883" observedRunningTime="2026-01-30 
10:06:54.497136449 +0000 UTC m=+5819.469448000" watchObservedRunningTime="2026-01-30 10:06:54.502724725 +0000 UTC m=+5819.475036276" Jan 30 10:06:58 crc kubenswrapper[4758]: I0130 10:06:58.227073 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ljdcd" Jan 30 10:06:58 crc kubenswrapper[4758]: I0130 10:06:58.227675 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ljdcd" Jan 30 10:06:58 crc kubenswrapper[4758]: I0130 10:06:58.280914 4758 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ljdcd" Jan 30 10:07:04 crc kubenswrapper[4758]: I0130 10:07:04.768994 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:07:04 crc kubenswrapper[4758]: E0130 10:07:04.769560 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:07:08 crc kubenswrapper[4758]: I0130 10:07:08.289836 4758 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ljdcd" Jan 30 10:07:08 crc kubenswrapper[4758]: I0130 10:07:08.344305 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ljdcd"] Jan 30 10:07:08 crc kubenswrapper[4758]: I0130 10:07:08.612991 4758 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ljdcd" podUID="b5682a35-e6fb-4ecf-883c-796b4d1988c0" containerName="registry-server" containerID="cri-o://9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4" gracePeriod=2 Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.101744 4758 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ljdcd" Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.254163 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-catalog-content\") pod \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.254628 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqs8m\" (UniqueName: \"kubernetes.io/projected/b5682a35-e6fb-4ecf-883c-796b4d1988c0-kube-api-access-pqs8m\") pod \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.254737 4758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-utilities\") pod \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\" (UID: \"b5682a35-e6fb-4ecf-883c-796b4d1988c0\") " Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.255813 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-utilities" (OuterVolumeSpecName: "utilities") pod "b5682a35-e6fb-4ecf-883c-796b4d1988c0" (UID: "b5682a35-e6fb-4ecf-883c-796b4d1988c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.261402 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5682a35-e6fb-4ecf-883c-796b4d1988c0-kube-api-access-pqs8m" (OuterVolumeSpecName: "kube-api-access-pqs8m") pod "b5682a35-e6fb-4ecf-883c-796b4d1988c0" (UID: "b5682a35-e6fb-4ecf-883c-796b4d1988c0"). InnerVolumeSpecName "kube-api-access-pqs8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.319230 4758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5682a35-e6fb-4ecf-883c-796b4d1988c0" (UID: "b5682a35-e6fb-4ecf-883c-796b4d1988c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.357133 4758 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.357166 4758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqs8m\" (UniqueName: \"kubernetes.io/projected/b5682a35-e6fb-4ecf-883c-796b4d1988c0-kube-api-access-pqs8m\") on node \"crc\" DevicePath \"\"" Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.357177 4758 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5682a35-e6fb-4ecf-883c-796b4d1988c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.623556 4758 generic.go:334] "Generic (PLEG): container finished" podID="b5682a35-e6fb-4ecf-883c-796b4d1988c0" containerID="9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4" exitCode=0 Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.623602 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljdcd" event={"ID":"b5682a35-e6fb-4ecf-883c-796b4d1988c0","Type":"ContainerDied","Data":"9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4"} Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.623632 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljdcd" event={"ID":"b5682a35-e6fb-4ecf-883c-796b4d1988c0","Type":"ContainerDied","Data":"4d5d1d421b45d8382306b2113710c6bccea1f4dd16f411ccd63e1d217fcd85c4"} Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.623643 4758 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.623651 4758 scope.go:117] "RemoveContainer" containerID="9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4"
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.642669 4758 scope.go:117] "RemoveContainer" containerID="22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820"
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.663029 4758 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ljdcd"]
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.673415 4758 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ljdcd"]
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.678769 4758 scope.go:117] "RemoveContainer" containerID="b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3"
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.726238 4758 scope.go:117] "RemoveContainer" containerID="9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4"
Jan 30 10:07:09 crc kubenswrapper[4758]: E0130 10:07:09.726758 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4\": container with ID starting with 9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4 not found: ID does not exist" containerID="9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4"
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.726807 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4"} err="failed to get container status \"9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4\": rpc error: code = NotFound desc = could not find container \"9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4\": container with ID starting with 9f882ee11468bdb9a696bc3d8705d25c75d2d5e7651fe7fc12c0d6982ac6f2a4 not found: ID does not exist"
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.726833 4758 scope.go:117] "RemoveContainer" containerID="22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820"
Jan 30 10:07:09 crc kubenswrapper[4758]: E0130 10:07:09.727176 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820\": container with ID starting with 22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820 not found: ID does not exist" containerID="22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820"
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.727304 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820"} err="failed to get container status \"22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820\": rpc error: code = NotFound desc = could not find container \"22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820\": container with ID starting with 22524c43cf582480e906b2487109ac20f54fc75e9a23279f8c06bb137e021820 not found: ID does not exist"
Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.727421 4758 scope.go:117] "RemoveContainer" containerID="b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3"
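[Editor's note] The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" entries show CRI-O returning gRPC NotFound for containers that are already gone; the kubelet logs the error and continues, so deletion is effectively idempotent. A hedged sketch of that check, assuming a CRI-style error carrying a gRPC status code (alreadyRemoved is a hypothetical helper illustrating the pattern, not a kubelet function):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyRemoved reports whether a CRI call failed only because the
// container no longer exists; cleanup paths can treat that as success.
func alreadyRemoved(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	// Simulate the runtime response recorded in the log above.
	err := status.Error(codes.NotFound, "could not find container: ID does not exist")
	if alreadyRemoved(err) {
		fmt.Println("container already gone; treating delete as a no-op")
	}
}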
containerID="b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3" Jan 30 10:07:09 crc kubenswrapper[4758]: E0130 10:07:09.727772 4758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3\": container with ID starting with b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3 not found: ID does not exist" containerID="b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3" Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.728049 4758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3"} err="failed to get container status \"b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3\": rpc error: code = NotFound desc = could not find container \"b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3\": container with ID starting with b87cdc9710c1cccfd08e7e79196b07de4219c79008dce3c309d5bab10a3c47c3 not found: ID does not exist" Jan 30 10:07:09 crc kubenswrapper[4758]: I0130 10:07:09.777611 4758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5682a35-e6fb-4ecf-883c-796b4d1988c0" path="/var/lib/kubelet/pods/b5682a35-e6fb-4ecf-883c-796b4d1988c0/volumes" Jan 30 10:07:18 crc kubenswrapper[4758]: I0130 10:07:18.768562 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:07:18 crc kubenswrapper[4758]: E0130 10:07:18.769686 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:07:31 crc kubenswrapper[4758]: I0130 10:07:31.769066 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:07:31 crc kubenswrapper[4758]: E0130 10:07:31.769965 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:07:44 crc kubenswrapper[4758]: I0130 10:07:44.768812 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:07:44 crc kubenswrapper[4758]: E0130 10:07:44.770712 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:07:59 crc kubenswrapper[4758]: I0130 10:07:59.768437 4758 scope.go:117] "RemoveContainer" 
containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:07:59 crc kubenswrapper[4758]: E0130 10:07:59.769284 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:08:14 crc kubenswrapper[4758]: I0130 10:08:14.768463 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:08:14 crc kubenswrapper[4758]: E0130 10:08:14.769411 4758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2nkwx_openshift-machine-config-operator(95cfcde3-10c8-4ece-a78a-9508f04a0f09)\"" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" podUID="95cfcde3-10c8-4ece-a78a-9508f04a0f09" Jan 30 10:08:26 crc kubenswrapper[4758]: I0130 10:08:26.769320 4758 scope.go:117] "RemoveContainer" containerID="74a942ee4c63cb844ae3db18989899550b34d449f52e8b16bff86f2ebf9fb916" Jan 30 10:08:27 crc kubenswrapper[4758]: I0130 10:08:27.266259 4758 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2nkwx" event={"ID":"95cfcde3-10c8-4ece-a78a-9508f04a0f09","Type":"ContainerStarted","Data":"e6183904969d375ed1d6878bb833f4097e19af4109cfd0f6282a50a22b32e986"} var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515137101620024441 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015137101621017357 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015137065406016514 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015137065406015464 5ustar corecore